Feb 19 00:09:00 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 19 00:09:01 crc kubenswrapper[5108]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.491194 5108 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500113 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500182 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500188 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500193 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500198 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500202 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500210 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500216 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500221 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500226 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500230 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500234 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500238 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500242 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500246 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500249 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500257 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500262 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500265 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500269 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500274 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500278 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500282 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500292 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500297 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500300 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500304 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500308 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500313 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500317 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500321 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500324 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500330 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500339 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500344 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500348 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500351 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500355 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500361 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500367 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500371 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500397 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500402 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500405 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500409 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500412 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500418 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500422 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500425 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500428 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500433 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500437 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500443 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500447 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500455 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500459 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500463 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500467 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500471 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500475 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500479 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500483 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500487 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500491 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500495 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500500 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500504 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500508 5108 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500511 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500514 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500517 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500521 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500525 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500532 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500543 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500547 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500550 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500554 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500557 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500560 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500564 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500567 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500571 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500575 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500578 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.500581 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501917 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501926 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501930 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501950 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501954 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501959 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501962 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501966 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501969 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501973 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501976 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501980 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501983 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501987 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501990 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501993 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.501997 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502000 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502005 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502009 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502014 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502017 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502021 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502024 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502027 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502031 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502034 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502037 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502041 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502046 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502049 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502052 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502056 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502059 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502063 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502066 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502070 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502074 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502077 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502080 5108 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502084 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502087 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502091 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502094 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502098 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502101 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502104 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502108 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502111 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502114 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502118 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502121 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502130 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502135 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502142 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502147 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502151 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502171 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502176 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502181 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502185 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502189 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502193 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502197 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502201 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502204 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502208 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502211 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502214 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502218 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502221 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502224 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502228 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502231 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502237 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502241 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502244 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502247 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502251 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502254 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502258 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502261 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502264 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502267 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502273 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.502277 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503108 5108 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503123 5108 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503132 5108 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503147 5108 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503153 5108 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503160 5108 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503166 5108 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503173 5108 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503178 5108 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503183 5108 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503187 5108 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503192 5108 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503196 5108 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503200 5108 flags.go:64] FLAG: --cgroup-root=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503205 5108 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503211 5108 flags.go:64] FLAG: --client-ca-file=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503215 5108 flags.go:64] FLAG: --cloud-config=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503220 5108 flags.go:64] FLAG: --cloud-provider=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503224 5108 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503234 5108 flags.go:64] FLAG: --cluster-domain=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503241 5108 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503246 5108 flags.go:64] FLAG: --config-dir=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503251 5108 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503257 5108 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503264 5108 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503269 5108 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503275 5108 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503280 5108 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503284 5108 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503288 5108 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503294 5108 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503298 5108 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503302 5108 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503310 5108 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503314 5108 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503318 5108 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503321 5108 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503325 5108 flags.go:64] FLAG: --enable-server="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503329 5108 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503335 5108 flags.go:64] FLAG: --event-burst="100"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503339 5108 flags.go:64] FLAG: --event-qps="50"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503343 5108 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503347 5108 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503351 5108 flags.go:64] FLAG: --eviction-hard=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503357 5108 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503361 5108 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503365 5108 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503369 5108 flags.go:64] FLAG: --eviction-soft=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503373 5108 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503377 5108 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503382 5108 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503386 5108 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503392 5108 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503396 5108 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503401 5108 flags.go:64] FLAG: --feature-gates=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503406 5108 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503410 5108 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503414 5108 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503419 5108 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503423 5108 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503427 5108 flags.go:64] FLAG: --help="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503431 5108 flags.go:64] FLAG: --hostname-override=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503436 5108 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503440 5108 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503444 5108 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503448 5108 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503453 5108 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503457 5108 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503461 5108 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503464 5108 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503468 5108 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503473 5108 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503477 5108 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503481 5108 flags.go:64] FLAG: --kube-reserved=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503485 5108 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503488 5108 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503492 5108 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503496 5108 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503500 5108 flags.go:64] FLAG: --lock-file=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503504 5108 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503509 5108 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503513 5108 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503520 5108 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503524 5108 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503530 5108 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503534 5108 flags.go:64] FLAG: --logging-format="text"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503538 5108 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503543 5108 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503546 5108 flags.go:64] FLAG: --manifest-url=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503550 5108 flags.go:64] FLAG: --manifest-url-header=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503556 5108 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503560 5108 flags.go:64] FLAG: --max-open-files="1000000"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503566 5108 flags.go:64] FLAG: --max-pods="110"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503570 5108 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503574 5108 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219
00:09:01.503577 5108 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503581 5108 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503586 5108 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503589 5108 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503595 5108 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503607 5108 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503611 5108 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503615 5108 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503619 5108 flags.go:64] FLAG: --pod-cidr="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503623 5108 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503632 5108 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503635 5108 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503639 5108 flags.go:64] FLAG: --pods-per-core="0" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503643 5108 flags.go:64] FLAG: --port="10250" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503648 5108 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503652 5108 flags.go:64] FLAG: --provider-id="" Feb 19 00:09:01 crc 
kubenswrapper[5108]: I0219 00:09:01.503656 5108 flags.go:64] FLAG: --qos-reserved="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503660 5108 flags.go:64] FLAG: --read-only-port="10255" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503664 5108 flags.go:64] FLAG: --register-node="true" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503668 5108 flags.go:64] FLAG: --register-schedulable="true" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503671 5108 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503680 5108 flags.go:64] FLAG: --registry-burst="10" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503687 5108 flags.go:64] FLAG: --registry-qps="5" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503691 5108 flags.go:64] FLAG: --reserved-cpus="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503695 5108 flags.go:64] FLAG: --reserved-memory="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503700 5108 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503704 5108 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503708 5108 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503713 5108 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503717 5108 flags.go:64] FLAG: --runonce="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503721 5108 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503725 5108 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503729 5108 flags.go:64] FLAG: --seccomp-default="false" Feb 19 00:09:01 crc kubenswrapper[5108]: 
I0219 00:09:01.503735 5108 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503739 5108 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503744 5108 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503748 5108 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503753 5108 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503757 5108 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503761 5108 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503765 5108 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503770 5108 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503774 5108 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503778 5108 flags.go:64] FLAG: --system-cgroups="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503782 5108 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503789 5108 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503793 5108 flags.go:64] FLAG: --tls-cert-file="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503797 5108 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503802 5108 flags.go:64] FLAG: --tls-min-version="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503806 5108 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 00:09:01 crc 
kubenswrapper[5108]: I0219 00:09:01.503810 5108 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503814 5108 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503818 5108 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503822 5108 flags.go:64] FLAG: --v="2" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503830 5108 flags.go:64] FLAG: --version="false" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503837 5108 flags.go:64] FLAG: --vmodule="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503843 5108 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.503847 5108 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504028 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504035 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504038 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504042 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504046 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504049 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504052 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504059 5108 feature_gate.go:328] 
unrecognized feature gate: UpgradeStatus Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504062 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504066 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504069 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504073 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504077 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504081 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504084 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504087 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504091 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504094 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504097 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504100 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504104 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504107 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 
00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504110 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504114 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504117 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504120 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504124 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504127 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504136 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504141 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504144 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504148 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504152 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504155 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504158 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504162 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification 
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504165 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504168 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504171 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504177 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504181 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504184 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504188 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504191 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504194 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504198 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504201 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504205 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504208 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504212 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig 
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504215 5108 feature_gate.go:328] unrecognized feature gate: Example2 Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504218 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504222 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504225 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504229 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504232 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504235 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504239 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504243 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504246 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504251 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504254 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504258 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504261 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 19 00:09:01 crc 
kubenswrapper[5108]: W0219 00:09:01.504265 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504268 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504271 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504274 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504278 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504281 5108 feature_gate.go:328] unrecognized feature gate: Example Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504284 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504289 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504293 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504296 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504301 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504305 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504308 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504311 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504315 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504319 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504322 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504325 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504329 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504332 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504515 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.504518 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.505767 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.519874 5108 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.519916 5108 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520077 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520098 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520111 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520121 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520132 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520142 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520152 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520163 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520173 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520182 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520191 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520200 5108 feature_gate.go:328] 
unrecognized feature gate: ClusterAPIInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520210 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520219 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520228 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520237 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520247 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520256 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520266 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520275 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520284 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520293 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520302 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520310 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520319 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520330 5108 feature_gate.go:351] Setting GA feature gate 
ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520345 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520355 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520366 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520375 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520386 5108 feature_gate.go:328] unrecognized feature gate: Example2 Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520971 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.520991 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521003 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521012 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521021 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521030 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521040 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521050 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521060 5108 feature_gate.go:328] unrecognized feature gate: 
AdditionalRoutingCapabilities
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521069 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521078 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521088 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521097 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521106 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521115 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521125 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521134 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521144 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521154 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521162 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521171 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521180 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521190 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521199 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521210 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521219 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521228 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521238 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521247 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521256 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521265 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521276 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521286 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521299 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521308 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521317 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521326 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521335 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521344 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521354 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521363 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521372 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521381 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521390 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521400 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521410 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521418 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521427 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521436 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521449 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521462 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521471 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521480 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521489 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521498 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.521515 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521795 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521815 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521827 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521838 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521848 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521860 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521869 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521880 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521890 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521900 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521912 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521926 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521971 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521984 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.521994 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522005 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522016 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522026 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522036 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522046 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522056 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522067 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522076 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522086 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522096 5108 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522107 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522116 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522125 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522133 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522176 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522186 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522196 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522205 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522214 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522224 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522233 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522242 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522251 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522260 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522270 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522279 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522288 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522297 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522309 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522320 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522330 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522338 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522347 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522356 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522365 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522374 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522383 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522392 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522404 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522416 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522425 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522434 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522442 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522451 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522460 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522470 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522479 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522487 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522496 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522504 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522513 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522523 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522531 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522542 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522551 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522560 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522569 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522578 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522587 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522596 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522606 5108 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522617 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522626 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522635 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522645 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522654 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522664 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522673 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522681 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522690 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:01 crc kubenswrapper[5108]: W0219 00:09:01.522699 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.522715 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.523722 5108 server.go:962] "Client rotation is on, will bootstrap in background"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.531219 5108 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.536072 5108 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.536339 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.537820 5108 server.go:1019] "Starting client certificate rotation"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.538005 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.538108 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.568576 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.575274 5108 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.575703 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.595913 5108 log.go:25] "Validated CRI v1 runtime API"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.648586 5108 log.go:25] "Validated CRI v1 image API"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.652557 5108 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.659776 5108 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-19-00-02-53-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.659839 5108 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.692421 5108 manager.go:217] Machine: {Timestamp:2026-02-19 00:09:01.689058865 +0000 UTC m=+0.655705243 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:d735bf3f-8433-4393-ae09-99790265e39c BootID:352aa3ad-02f7-4441-9880-46137003ff3d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a1:dd:53 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a1:dd:53 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9a:5d:29 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:61:4c:dc Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:79:2c:e4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:25:f0:49 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:8c:ca:a6:34:7b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:6f:da:e6:b7:57 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.693311 5108 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.693718 5108 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.696472 5108 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.696540 5108 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.696867 5108 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.696887 5108 container_manager_linux.go:306] "Creating device plugin manager"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.696929 5108 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.697611 5108 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.698694 5108 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.699009 5108 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.701979 5108 kubelet.go:491] "Attempting to sync node with API server"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.702067 5108 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.702097 5108 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.702119 5108 kubelet.go:397] "Adding apiserver pod source"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.702153 5108 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.708419 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.708480 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.709584 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.709855 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.712756 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.712817 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.717706 5108 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.718183 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.719014 5108 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720040 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720072 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720084 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720096 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720107 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720117 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720127 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720137 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720150 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720173 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720190 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.720639 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.722117 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.722140 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.723861 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.746345 5108 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.746446 5108 server.go:1295] "Started kubelet"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.746649 5108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.746734 5108 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.746819 5108 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.747704 5108 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.749094 5108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.749608 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 19 00:09:01 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.750054 5108 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.750094 5108 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.750131 5108 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.750095 5108 server.go:317] "Adding debug handlers to kubelet server"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.750533 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.750601 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.752265 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.753258 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d3fe0b2bceb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,LastTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761273 5108 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761323 5108 factory.go:55] Registering systemd factory
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761339 5108 factory.go:223] Registration of the systemd container factory successfully
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761779 5108 factory.go:153] Registering CRI-O factory
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761821 5108 factory.go:223] Registration of the crio container factory successfully
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761855
5108 factory.go:103] Registering Raw factory Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.761877 5108 manager.go:1196] Started watching for new ooms in manager Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.763032 5108 manager.go:319] Starting recovery of all containers Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.779214 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.793513 5108 manager.go:324] Recovery completed Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.807271 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.808628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.808693 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.808707 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.817086 5108 cpu_manager.go:222] "Starting CPU manager" policy="none" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.817116 5108 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.817144 5108 state_mem.go:36] "Initialized new in-memory state store" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.830295 5108 policy_none.go:49] "None policy: Start" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.832870 5108 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.832956 5108 state_mem.go:35] "Initializing new in-memory state store" Feb 19 00:09:01 crc kubenswrapper[5108]: 
I0219 00:09:01.833723 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.833825 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.833848 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.833878 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836067 5108 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836106 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836124 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836145 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836164 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836190 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836212 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836228 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" 
seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836270 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836308 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836327 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836350 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836365 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836379 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 
00:09:01.836396 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836410 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836424 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836438 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836453 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836474 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836489 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836503 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836522 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836539 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836557 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836596 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836609 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836624 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836641 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836662 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836677 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836691 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836707 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" 
seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836723 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836737 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836755 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836768 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836782 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836799 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 
00:09:01.836846 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836863 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836882 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836900 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.836917 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837043 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837066 5108 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837082 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837096 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837112 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837128 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837144 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837161 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837175 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837210 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837228 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837241 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837257 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837271 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837288 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837303 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837341 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837355 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837372 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837387 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837401 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837416 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837430 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837455 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837469 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837485 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837501 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837516 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837533 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837547 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837564 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837579 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837595 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837611 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837624 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837637 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837651 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837666 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837680 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837731 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837749 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837762 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837777 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837792 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837805 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837820 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837835 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837850 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837865 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837880 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837895 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837909 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.837923 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838002 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838017 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838032 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838045 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838057 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838074 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838087 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838102 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838114 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838126 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838140 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838154 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838184 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838198 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838212 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838226 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838244 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838265 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838282 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838301 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838320 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838342 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838363 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838382 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838402 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838422 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838452 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838470 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838495 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838513 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838531 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838547 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838568 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838585 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838602 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838618 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838635 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838656 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838674 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838691 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838709 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838727 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838744 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838786 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838806 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838822 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838840 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838857 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838875 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838892 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838910 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838926 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838972 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.838994 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839015 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839035 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839054 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839070 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839087 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839106 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839124 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839138 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839156 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839170 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839184 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839200 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839216 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839230 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839243 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839260 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839276 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839290 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839304 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839320 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839335 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839351 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839369 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839388 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839406 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839425 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839445 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839461 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839477 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839490 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839504 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839519 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839535 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839549 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839568 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839587 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839605 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839625 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839639 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839653 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839667 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839683 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839701 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839715 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839733 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839754 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839775 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839795 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839809 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9"
volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839826 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839845 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839860 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839875 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839890 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839908 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" 
volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839923 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839979 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.839996 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840011 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840025 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840039 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840054 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840070 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840085 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840100 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840114 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840128 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" 
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840144 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840158 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840174 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840230 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840243 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840257 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840271 5108 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840284 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840300 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840316 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840330 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840345 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840380 5108 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840399 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840418 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840437 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840451 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840466 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840480 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" 
volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840496 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840518 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840534 5108 reconstruct.go:97] "Volume reconstruction finished" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.840545 5108 reconciler.go:26] "Reconciler: start to sync state" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.846663 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.846738 5108 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.846775 5108 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.846795 5108 kubelet.go:2451] "Starting kubelet main sync loop" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.846972 5108 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.849171 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.850807 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.888006 5108 manager.go:341] "Starting Device Plugin manager" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.888279 5108 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.888305 5108 server.go:85] "Starting device plugin registration server" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.888807 5108 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.888827 5108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.889029 5108 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.889148 5108 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 00:09:01 
crc kubenswrapper[5108]: I0219 00:09:01.889164 5108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.894435 5108 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.894495 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.947545 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.947793 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.949294 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.949371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.949385 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.951001 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.951250 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.951349 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.951463 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952590 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.952849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.953565 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.953779 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.953836 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954237 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954300 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954695 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.954706 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.955492 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.955527 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.955559 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.956500 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.956524 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.956532 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957251 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957305 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957343 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957372 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957281 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.957987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.958013 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.958024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.958320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.958363 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.958373 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.959138 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.959187 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.959996 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.960056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.960067 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.987173 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.989359 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.990890 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.990968 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.990981 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[5108]: I0219 00:09:01.991016 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.995644 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.234:6443: connect: connection refused" node="crc"
Feb 19 00:09:01 crc kubenswrapper[5108]: E0219 00:09:01.997228 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.023904 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.039970 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.043737 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.043813 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.043979 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044077 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044277 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044346 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044417 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044446 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.044497 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.046888 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.047535 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146170 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146221 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146313 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146332 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146396 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146440 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146460 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146479 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146517 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146557 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146580 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146602 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146622 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146641 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.146662 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147521 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147601 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147646 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147659 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147703 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147871 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147913 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147977 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.148030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147553 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.147683 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.148497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.148552 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.148596 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.196598 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.197833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.197922 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.198001 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.198055 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.198758 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247254 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247336 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247350 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247372 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247390 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247445 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247467 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247492 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247487 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247559 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247599 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.247675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.287498 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.299121 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.324658 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.341313 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: W0219 00:09:02.342677 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-e4548d2ca96ec295c66bcbb53157ebc026130cfb8d094d259b6fd28914ad6cde WatchSource:0}: Error finding container e4548d2ca96ec295c66bcbb53157ebc026130cfb8d094d259b6fd28914ad6cde: Status 404 returned error can't find the container with id e4548d2ca96ec295c66bcbb53157ebc026130cfb8d094d259b6fd28914ad6cde
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.348479 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.353084 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.358243 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:09:02 crc kubenswrapper[5108]: W0219 00:09:02.366854 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-1e2a5223e033f3695e2f4fa9230ff5adb264f2034ebd17ea673da08fae01a22d WatchSource:0}: Error finding container 1e2a5223e033f3695e2f4fa9230ff5adb264f2034ebd17ea673da08fae01a22d: Status 404 returned error can't find the container with id 1e2a5223e033f3695e2f4fa9230ff5adb264f2034ebd17ea673da08fae01a22d
Feb 19 00:09:02 crc kubenswrapper[5108]: W0219 00:09:02.368997 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-9013a677e99e56a294376a4bcbc656b78655dd7a4007dc99a7c0f953dcd99931 WatchSource:0}: Error finding container 9013a677e99e56a294376a4bcbc656b78655dd7a4007dc99a7c0f953dcd99931: Status 404 returned error can't find the container with id 9013a677e99e56a294376a4bcbc656b78655dd7a4007dc99a7c0f953dcd99931
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.574868 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.599380 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.600446 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.600485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.600497 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.600532 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.600984 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 19 00:09:02 crc kubenswrapper[5108]: E0219 00:09:02.723815 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.725124 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.852030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1e2a5223e033f3695e2f4fa9230ff5adb264f2034ebd17ea673da08fae01a22d"}
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.854493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e4548d2ca96ec295c66bcbb53157ebc026130cfb8d094d259b6fd28914ad6cde"}
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.859780 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"3c210d1127d5c6dcc525d05301f5cd4cc9152cbfb929ab26dae1d1c9749e1c8e"}
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.862966 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1bfdfda4b18076a90d3fdaa67c9e94056cb2d51d9b26b3266d10c6d304a03a7a"}
Feb 19 00:09:02 crc kubenswrapper[5108]: I0219 00:09:02.868848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9013a677e99e56a294376a4bcbc656b78655dd7a4007dc99a7c0f953dcd99931"}
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.087824 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.154239 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.265312 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.371486 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d3fe0b2bceb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,LastTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.402112 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.403361 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.403441 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.403461 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.403503 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.404242 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.649670 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.650798 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.725275 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.872608 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4"}
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.872720 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95"}
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.874331 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9" exitCode=0
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.874443 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9"}
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.874634 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.875599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.875662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.875672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.875994 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.877604 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057" exitCode=0
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.877733 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057"}
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.877776 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.877742 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.878466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.878497 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.878506 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.878687 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:03 crc kubenswrapper[5108]: I0219
00:09:03.879250 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.879296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.879311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.880374 5108 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192" exitCode=0 Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.880431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192"} Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.880529 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.881844 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.881920 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.881988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.882360 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.883809 
5108 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729" exitCode=0 Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.883845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729"} Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.884076 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.884873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.884905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[5108]: I0219 00:09:03.884921 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[5108]: E0219 00:09:03.885136 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.678143 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.725221 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.755314 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.890605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.890672 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.890686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.891409 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.893434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.893463 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc 
kubenswrapper[5108]: I0219 00:09:04.893477 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.893783 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.895737 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.895802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.896004 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.897179 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.897218 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.897233 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.897480 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.899541 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.899569 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.899583 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901162 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd" exitCode=0 Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901231 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901385 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901762 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.901802 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.901988 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.905004 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404"} Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.905133 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.905516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.905539 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[5108]: I0219 00:09:04.905552 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[5108]: E0219 00:09:04.905718 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.006637 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.013726 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.013786 5108 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.013797 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.013828 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.014541 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.029141 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.029926 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.915490 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"50c66ec26c5ddbb0102e6367e48d1ee153a770d6e6c14688f77cd14c8bb05e85"} Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.915852 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe"} Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.916371 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.917799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.918215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.919106 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.919911 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922144 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015" exitCode=0 Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922331 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015"} Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922375 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922469 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922581 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922623 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.922681 5108 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.923610 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.923687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.923706 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.924272 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924532 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924584 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924876 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924910 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.924925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.925180 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.925274 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.925328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.925430 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[5108]: I0219 00:09:05.925452 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[5108]: E0219 00:09:05.925826 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.929923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447"} Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.930006 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006"} Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.930018 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a"} Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.930175 5108 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.930310 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.930361 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931350 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931379 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931416 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931464 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[5108]: I0219 00:09:06.931551 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[5108]: E0219 00:09:06.932060 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:06 crc kubenswrapper[5108]: E0219 00:09:06.932169 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.471665 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.750396 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.750671 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.751980 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.752031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.752052 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[5108]: E0219 00:09:07.752591 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.937975 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d"} Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.938038 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649"} Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.938188 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.938254 5108 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.938204 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939379 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939456 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939544 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.939406 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[5108]: E0219 00:09:07.939781 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not 
found" node="crc" Feb 19 00:09:07 crc kubenswrapper[5108]: E0219 00:09:07.940219 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:07 crc kubenswrapper[5108]: E0219 00:09:07.940538 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:07 crc kubenswrapper[5108]: I0219 00:09:07.995692 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.214717 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.215923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.216033 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.216054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.216094 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.941248 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.942238 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.942311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[5108]: I0219 00:09:08.942349 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[5108]: E0219 00:09:08.943173 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.461316 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.461755 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.463091 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.463169 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.463189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[5108]: E0219 00:09:09.463780 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.860855 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.943565 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.944421 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.944484 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[5108]: I0219 00:09:09.944513 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[5108]: E0219 00:09:09.945340 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:11 crc kubenswrapper[5108]: I0219 00:09:11.045424 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:11 crc kubenswrapper[5108]: I0219 00:09:11.045710 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:11 crc kubenswrapper[5108]: I0219 00:09:11.046882 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[5108]: I0219 00:09:11.046992 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[5108]: I0219 00:09:11.047015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[5108]: E0219 00:09:11.047520 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:11 crc kubenswrapper[5108]: E0219 00:09:11.894866 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:12 crc kubenswrapper[5108]: I0219 00:09:12.129960 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 19 00:09:12 crc kubenswrapper[5108]: I0219 
00:09:12.130240 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:12 crc kubenswrapper[5108]: I0219 00:09:12.131409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[5108]: I0219 00:09:12.131482 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[5108]: I0219 00:09:12.131502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[5108]: E0219 00:09:12.132394 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.046238 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.046375 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.214259 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.214629 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.216606 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.216666 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.216686 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[5108]: E0219 00:09:14.217180 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.225814 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.960446 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.961606 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.961678 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.961702 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[5108]: E0219 00:09:14.962347 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:14 crc kubenswrapper[5108]: I0219 00:09:14.967886 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 
00:09:15.432814 5108 trace.go:236] Trace[346573041]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:05.431) (total time: 10001ms): Feb 19 00:09:15 crc kubenswrapper[5108]: Trace[346573041]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:15.432) Feb 19 00:09:15 crc kubenswrapper[5108]: Trace[346573041]: [10.001657762s] [10.001657762s] END Feb 19 00:09:15 crc kubenswrapper[5108]: E0219 00:09:15.432883 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.726465 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.760124 5108 trace.go:236] Trace[631689140]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:05.758) (total time: 10001ms): Feb 19 00:09:15 crc kubenswrapper[5108]: Trace[631689140]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:15.760) Feb 19 00:09:15 crc kubenswrapper[5108]: Trace[631689140]: [10.001908161s] [10.001908161s] END Feb 19 00:09:15 crc kubenswrapper[5108]: E0219 00:09:15.760155 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.938685 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.938762 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.946172 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.946255 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.963126 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:15 crc kubenswrapper[5108]: 
I0219 00:09:15.963817 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.963848 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[5108]: I0219 00:09:15.963859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[5108]: E0219 00:09:15.964191 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.640482 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.641036 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.642428 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.642491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.642512 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[5108]: E0219 00:09:17.643246 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.701827 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 00:09:17 crc kubenswrapper[5108]: E0219 00:09:17.956125 5108 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.968714 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.969824 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.969890 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.969916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[5108]: E0219 00:09:17.970873 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:17 crc kubenswrapper[5108]: I0219 00:09:17.984111 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 00:09:18 crc kubenswrapper[5108]: I0219 00:09:18.972157 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:18 crc kubenswrapper[5108]: I0219 00:09:18.973220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[5108]: I0219 00:09:18.973323 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[5108]: I0219 00:09:18.973383 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[5108]: E0219 
00:09:18.974135 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:19 crc kubenswrapper[5108]: E0219 00:09:19.412467 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.472901 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.473325 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.474788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.474835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.474849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[5108]: E0219 00:09:19.475484 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.483431 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:19 crc kubenswrapper[5108]: E0219 00:09:19.776758 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in 
API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.974611 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.975507 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.975564 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[5108]: I0219 00:09:19.975605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[5108]: E0219 00:09:19.976168 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:20 crc kubenswrapper[5108]: I0219 00:09:20.947522 5108 trace.go:236] Trace[2084918595]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:10.797) (total time: 10149ms): Feb 19 00:09:20 crc kubenswrapper[5108]: Trace[2084918595]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 10149ms (00:09:20.947) Feb 19 00:09:20 crc kubenswrapper[5108]: Trace[2084918595]: [10.149816922s] [10.149816922s] END Feb 19 00:09:20 crc kubenswrapper[5108]: I0219 00:09:20.948058 5108 trace.go:236] Trace[1798496867]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:10.218) (total time: 10729ms): Feb 19 00:09:20 crc kubenswrapper[5108]: Trace[1798496867]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource 
"runtimeclasses" in API group "node.k8s.io" at the cluster scope 10729ms (00:09:20.948) Feb 19 00:09:20 crc kubenswrapper[5108]: Trace[1798496867]: [10.729114151s] [10.729114151s] END Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.948089 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.948091 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.948488 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe0b2bceb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,LastTimestamp:2026-02-19 00:09:01.746380011 +0000 UTC m=+0.713026359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.950694 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.952262 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.952378 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 
00:09:20.957013 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.958262 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe9878f71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.894545265 +0000 UTC m=+0.861191573,LastTimestamp:2026-02-19 00:09:01.894545265 +0000 UTC m=+0.861191573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: I0219 00:09:20.961610 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 19 
00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.966369 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.949352617 +0000 UTC m=+0.915998925,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.972243 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.949380038 +0000 UTC m=+0.916026346,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.980258 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469e06b\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.94943364 +0000 UTC m=+0.916079948,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.987751 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.952582845 +0000 UTC m=+0.919229153,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.993875 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.952596975 +0000 UTC m=+0.919243273,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:20 crc kubenswrapper[5108]: E0219 00:09:20.999658 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469e06b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.952608946 +0000 UTC m=+0.919255244,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.012239 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.952801122 +0000 UTC m=+0.919447440,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.018679 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.952841613 +0000 UTC m=+0.919487931,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.024158 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469e06b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.952856134 +0000 UTC m=+0.919502462,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.028588 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.954277701 +0000 UTC m=+0.920924029,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.033147 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC 
m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.954310642 +0000 UTC m=+0.920956960,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.038856 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469e06b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.954340393 +0000 UTC m=+0.920986711,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.044592 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.954680466 +0000 UTC m=+0.921326774,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.053404 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.954702326 +0000 UTC m=+0.921348634,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.057198 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.057380 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.058408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.058453 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.058468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.058534 5108 event.go:359] "Server rejected event 
(will not retry!)" err="events \"crc.18957d3fe469e06b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.954713607 +0000 UTC m=+0.921359915,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.058830 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.066053 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.956517647 +0000 UTC m=+0.923163955,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.069450 
5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.070533 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.956529297 +0000 UTC m=+0.923175605,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.077195 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469e06b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469e06b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808713835 +0000 UTC m=+0.775360143,LastTimestamp:2026-02-19 00:09:01.956548568 +0000 UTC m=+0.923194876,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc 
kubenswrapper[5108]: E0219 00:09:21.083330 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469491f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469491f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808675103 +0000 UTC m=+0.775321411,LastTimestamp:2026-02-19 00:09:01.957291282 +0000 UTC m=+0.923937600,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.088323 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d3fe469aed8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d3fe469aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:01.808701144 +0000 UTC m=+0.775347452,LastTimestamp:2026-02-19 00:09:01.957314403 +0000 UTC m=+0.923960731,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.095621 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d4005320655 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:02.358701653 +0000 UTC m=+1.325348001,LastTimestamp:2026-02-19 00:09:02.358701653 +0000 UTC m=+1.325348001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.100696 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d400533c6dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:02.358816476 +0000 UTC m=+1.325462804,LastTimestamp:2026-02-19 00:09:02.358816476 +0000 UTC m=+1.325462804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.105959 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4005d07206 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:02.36908391 +0000 UTC m=+1.335730218,LastTimestamp:2026-02-19 00:09:02.36908391 +0000 UTC m=+1.335730218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.111486 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d4006254ed9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:02.374645465 +0000 UTC m=+1.341291813,LastTimestamp:2026-02-19 00:09:02.374645465 +0000 UTC m=+1.341291813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.116890 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d40062596ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:02.374663917 +0000 UTC m=+1.341310245,LastTimestamp:2026-02-19 00:09:02.374663917 +0000 UTC m=+1.341310245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.122490 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d402ec20c75 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.056006261 +0000 UTC m=+2.022652589,LastTimestamp:2026-02-19 00:09:03.056006261 +0000 UTC m=+2.022652589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.127051 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d402ec44184 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.056150916 +0000 UTC m=+2.022797244,LastTimestamp:2026-02-19 00:09:03.056150916 +0000 UTC m=+2.022797244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.132565 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d402ec420b3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.056142515 +0000 UTC m=+2.022788823,LastTimestamp:2026-02-19 00:09:03.056142515 +0000 UTC m=+2.022788823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.138204 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d402ec55bf2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.056223218 +0000 UTC m=+2.022869526,LastTimestamp:2026-02-19 00:09:03.056223218 +0000 UTC m=+2.022869526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.142713 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d402ec69aa0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.0563048 +0000 UTC m=+2.022951138,LastTimestamp:2026-02-19 00:09:03.0563048 +0000 UTC m=+2.022951138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.147368 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d402f779b92 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.067904914 +0000 UTC m=+2.034551232,LastTimestamp:2026-02-19 00:09:03.067904914 +0000 UTC m=+2.034551232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.153404 5108 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d402fc055ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.07267121 +0000 UTC m=+2.039317528,LastTimestamp:2026-02-19 00:09:03.07267121 +0000 UTC m=+2.039317528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.159652 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d402ff2759c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.075956124 +0000 UTC m=+2.042602452,LastTimestamp:2026-02-19 00:09:03.075956124 +0000 UTC m=+2.042602452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.163646 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d402ff89884 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.076358276 +0000 UTC m=+2.043004584,LastTimestamp:2026-02-19 00:09:03.076358276 +0000 UTC m=+2.043004584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.165258 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d40300c3003 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.077642243 +0000 UTC m=+2.044288561,LastTimestamp:2026-02-19 00:09:03.077642243 +0000 UTC 
m=+2.044288561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.170179 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d4030121f8c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.078031244 +0000 UTC m=+2.044677592,LastTimestamp:2026-02-19 00:09:03.078031244 +0000 UTC m=+2.044677592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.176211 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d40433a5788 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.39943412 +0000 UTC m=+2.366080458,LastTimestamp:2026-02-19 00:09:03.39943412 +0000 UTC m=+2.366080458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.181294 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d40442dfaa9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.415401129 +0000 UTC m=+2.382047467,LastTimestamp:2026-02-19 00:09:03.415401129 +0000 UTC m=+2.382047467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.187062 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d404447689d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.417067677 +0000 UTC m=+2.383714015,LastTimestamp:2026-02-19 00:09:03.417067677 +0000 UTC m=+2.383714015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.193714 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d405faadddc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.876570588 +0000 UTC m=+2.843216926,LastTimestamp:2026-02-19 00:09:03.876570588 +0000 UTC m=+2.843216926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.203372 5108 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d405fb9d4b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.877551286 +0000 UTC m=+2.844197604,LastTimestamp:2026-02-19 00:09:03.877551286 +0000 UTC m=+2.844197604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.205638 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44780->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.205696 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44780->192.168.126.11:17697: read: connection reset by peer"
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.205702 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44790->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.205775 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44790->192.168.126.11:17697: read: connection reset by peer"
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.206122 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.206150 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.209748 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d405fdf4201 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.880004097 +0000 UTC m=+2.846650405,LastTimestamp:2026-02-19 00:09:03.880004097 +0000 UTC m=+2.846650405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.214541 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d40602e34d1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.885178065 +0000 UTC m=+2.851824403,LastTimestamp:2026-02-19 00:09:03.885178065 +0000 UTC m=+2.851824403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.218982 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d40603ec87e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.886264446 +0000 UTC m=+2.852910754,LastTimestamp:2026-02-19 00:09:03.886264446 +0000 UTC m=+2.852910754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.222907 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d406158cbc4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.904746436 +0000 UTC m=+2.871392744,LastTimestamp:2026-02-19 00:09:03.904746436 +0000 UTC m=+2.871392744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.226674 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d4061f6816e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:03.915082094 +0000 UTC m=+2.881728402,LastTimestamp:2026-02-19 00:09:03.915082094 +0000 UTC m=+2.881728402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.230601 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4072d25c37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.197925943 +0000 UTC m=+3.164572251,LastTimestamp:2026-02-19 00:09:04.197925943 +0000 UTC m=+3.164572251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.234313 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d4072fa08c2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.200526018 +0000 UTC m=+3.167172326,LastTimestamp:2026-02-19 00:09:04.200526018 +0000 UTC m=+3.167172326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.238387 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d4072fb6dd7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.200617431 +0000 UTC m=+3.167263739,LastTimestamp:2026-02-19 00:09:04.200617431 +0000 UTC m=+3.167263739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.242541 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d4073000cd7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.200920279 +0000 UTC m=+3.167566587,LastTimestamp:2026-02-19 00:09:04.200920279 +0000 UTC m=+3.167566587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.246482 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d407310e293 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.202023571 +0000 UTC m=+3.168669879,LastTimestamp:2026-02-19 00:09:04.202023571 +0000 UTC m=+3.168669879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.250577 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4073975c68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.210836584 +0000 UTC m=+3.177482892,LastTimestamp:2026-02-19 00:09:04.210836584 +0000 UTC m=+3.177482892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.254223 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4073a86abe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.211954366 +0000 UTC m=+3.178600674,LastTimestamp:2026-02-19 00:09:04.211954366 +0000 UTC m=+3.178600674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.257744 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d40740b9a9a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.218454682 +0000 UTC m=+3.185100990,LastTimestamp:2026-02-19 00:09:04.218454682 +0000 UTC m=+3.185100990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.262320 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d40741a4656 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.21941615 +0000 UTC m=+3.186062458,LastTimestamp:2026-02-19 00:09:04.21941615 +0000 UTC m=+3.186062458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.267299 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d40745a6421 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.223618081 +0000 UTC m=+3.190264389,LastTimestamp:2026-02-19 00:09:04.223618081 +0000 UTC m=+3.190264389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.271752 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40746ca2f7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.224813815 +0000 UTC m=+3.191460123,LastTimestamp:2026-02-19 00:09:04.224813815 +0000 UTC m=+3.191460123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.276492 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d40746f9ab6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.22500831 +0000 UTC m=+3.191654618,LastTimestamp:2026-02-19 00:09:04.22500831 +0000 UTC m=+3.191654618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.280581 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d407fbc5cb2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.414588082 +0000 UTC m=+3.381234390,LastTimestamp:2026-02-19 00:09:04.414588082 +0000 UTC m=+3.381234390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.284269 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d407fe4cbd1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.417237969 +0000 UTC m=+3.383884277,LastTimestamp:2026-02-19 00:09:04.417237969 +0000 UTC m=+3.383884277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.289738 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40807f6e26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.42737207 +0000 UTC m=+3.394018378,LastTimestamp:2026-02-19 00:09:04.42737207 +0000 UTC m=+3.394018378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.293350 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40809606e3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.428852963 +0000 UTC m=+3.395499271,LastTimestamp:2026-02-19 00:09:04.428852963 +0000 UTC m=+3.395499271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.297005 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d408120fee8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.437960424 +0000 UTC m=+3.404606732,LastTimestamp:2026-02-19 00:09:04.437960424 +0000 UTC m=+3.404606732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.300449 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d408147f613 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.440514067 +0000 UTC m=+3.407160375,LastTimestamp:2026-02-19 00:09:04.440514067 +0000 UTC m=+3.407160375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.303722 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d408e09cbf6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.654543862 +0000 UTC m=+3.621190160,LastTimestamp:2026-02-19 00:09:04.654543862 +0000 UTC m=+3.621190160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.306903 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d408e49709d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.658714781 +0000 UTC m=+3.625361089,LastTimestamp:2026-02-19 00:09:04.658714781 +0000 UTC m=+3.625361089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.311035 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d408f248987 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.673073543 +0000 UTC m=+3.639719851,LastTimestamp:2026-02-19 00:09:04.673073543 +0000 UTC m=+3.639719851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.314797 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d408f65b89c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.677345436 +0000 UTC m=+3.643991764,LastTimestamp:2026-02-19 00:09:04.677345436 +0000 UTC m=+3.643991764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.326925 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d408f7b3e60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.678755936 +0000 UTC m=+3.645402254,LastTimestamp:2026-02-19 00:09:04.678755936 +0000 UTC m=+3.645402254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.334281 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d409cd88aab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.902974123 +0000 UTC m=+3.869620441,LastTimestamp:2026-02-19 00:09:04.902974123 +0000 UTC m=+3.869620441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.339929 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d409e56876a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.928008042 +0000 UTC m=+3.894654360,LastTimestamp:2026-02-19 00:09:04.928008042 +0000 UTC m=+3.894654360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.344163 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d409f661987 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.945805703 +0000 UTC m=+3.912452021,LastTimestamp:2026-02-19 00:09:04.945805703 +0000 UTC m=+3.912452021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.350531 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d409f81fba9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.947633065 +0000 UTC m=+3.914279373,LastTimestamp:2026-02-19 00:09:04.947633065 +0000 UTC m=+3.914279373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.354759 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40a9c96f9e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.120087966 +0000 UTC m=+4.086734274,LastTimestamp:2026-02-19 00:09:05.120087966 +0000 UTC m=+4.086734274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.358543 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40aad28695 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.137460885 +0000 UTC m=+4.104107193,LastTimestamp:2026-02-19 00:09:05.137460885 +0000 UTC m=+4.104107193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.362233 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b26d1bdd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.265032157 +0000 UTC m=+4.231678465,LastTimestamp:2026-02-19 00:09:05.265032157 +0000 UTC m=+4.231678465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.366131 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b31ac3a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.276412834 +0000 UTC m=+4.243059152,LastTimestamp:2026-02-19 00:09:05.276412834 +0000 UTC m=+4.243059152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.371878 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40d9f60dd2 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.928318418 +0000 UTC m=+4.894964736,LastTimestamp:2026-02-19 00:09:05.928318418 +0000 UTC m=+4.894964736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.372899 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40e8b97a7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.176006778 +0000 UTC m=+5.142653076,LastTimestamp:2026-02-19 00:09:06.176006778 +0000 UTC m=+5.142653076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.378860 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18957d40e99449d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.19034671 +0000 UTC m=+5.156993048,LastTimestamp:2026-02-19 00:09:06.19034671 +0000 UTC m=+5.156993048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.382711 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40e9af3715 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.192111381 +0000 UTC m=+5.158757719,LastTimestamp:2026-02-19 00:09:06.192111381 +0000 UTC m=+5.158757719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.387821 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40f682c1ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.407301549 +0000 UTC m=+5.373947857,LastTimestamp:2026-02-19 00:09:06.407301549 +0000 UTC m=+5.373947857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.394608 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40f7683ebe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.42234131 +0000 UTC m=+5.388987638,LastTimestamp:2026-02-19 00:09:06.42234131 +0000 UTC m=+5.388987638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.399352 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d40f77c59cd openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.423658957 +0000 UTC m=+5.390305265,LastTimestamp:2026-02-19 00:09:06.423658957 +0000 UTC m=+5.390305265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.403087 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4106d2febb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.680995515 +0000 UTC m=+5.647641863,LastTimestamp:2026-02-19 00:09:06.680995515 +0000 UTC m=+5.647641863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.406566 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18957d4107c8f4ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.697114858 +0000 UTC m=+5.663761206,LastTimestamp:2026-02-19 00:09:06.697114858 +0000 UTC m=+5.663761206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.410573 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4107e3135b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.698826587 +0000 UTC m=+5.665472895,LastTimestamp:2026-02-19 00:09:06.698826587 +0000 UTC m=+5.665472895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.415023 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d41153f130f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.922959631 +0000 UTC m=+5.889605959,LastTimestamp:2026-02-19 00:09:06.922959631 +0000 UTC m=+5.889605959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.420052 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d41162e5eac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.938642092 +0000 UTC m=+5.905288410,LastTimestamp:2026-02-19 00:09:06.938642092 +0000 UTC m=+5.905288410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.425120 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18957d411649cda5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:06.940439973 +0000 UTC m=+5.907086291,LastTimestamp:2026-02-19 00:09:06.940439973 +0000 UTC m=+5.907086291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.430250 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4122d7c7b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:07.15107116 +0000 UTC m=+6.117717498,LastTimestamp:2026-02-19 00:09:07.15107116 +0000 UTC m=+6.117717498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.434853 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4123b2b6d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:07.165419222 +0000 UTC m=+6.132065570,LastTimestamp:2026-02-19 00:09:07.165419222 +0000 UTC m=+6.132065570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.441370 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18957d42bdd513a0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Feb 19 00:09:21 crc kubenswrapper[5108]: body: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:14.04632976 +0000 UTC m=+13.012976108,LastTimestamp:2026-02-19 00:09:14.04632976 +0000 UTC m=+13.012976108,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 
00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.446467 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d42bdd70bdc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:14.046458844 +0000 UTC m=+13.013105192,LastTimestamp:2026-02-19 00:09:14.046458844 +0000 UTC m=+13.013105192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.451587 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18957d432ea0f57d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 19 00:09:21 crc kubenswrapper[5108]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:21 crc kubenswrapper[5108]: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:15.938739581 +0000 UTC m=+14.905385899,LastTimestamp:2026-02-19 00:09:15.938739581 +0000 UTC m=+14.905385899,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.459751 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d432ea1ab18 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:15.938786072 +0000 UTC m=+14.905432400,LastTimestamp:2026-02-19 00:09:15.938786072 +0000 UTC m=+14.905432400,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.463916 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d432ea0f57d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18957d432ea0f57d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 19 00:09:21 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:21 crc kubenswrapper[5108]: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:15.938739581 +0000 UTC m=+14.905385899,LastTimestamp:2026-02-19 00:09:15.946225307 +0000 UTC m=+14.912871625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.472769 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d432ea1ab18\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d432ea1ab18 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:15.938786072 +0000 UTC m=+14.905432400,LastTimestamp:2026-02-19 00:09:15.946286249 +0000 UTC m=+14.912932577,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.479707 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18957d44689003cc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44780->192.168.126.11:17697: read: connection reset by peer Feb 19 00:09:21 crc kubenswrapper[5108]: body: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.205674956 +0000 UTC m=+20.172321274,LastTimestamp:2026-02-19 00:09:21.205674956 +0000 UTC m=+20.172321274,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.484255 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18957d446890a657 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44780->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.205716567 +0000 UTC m=+20.172362885,LastTimestamp:2026-02-19 00:09:21.205716567 +0000 UTC m=+20.172362885,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.485919 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18957d44689125e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44790->192.168.126.11:17697: read: connection reset by peer Feb 19 00:09:21 crc kubenswrapper[5108]: body: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.205749218 +0000 UTC m=+20.172395526,LastTimestamp:2026-02-19 00:09:21.205749218 +0000 
UTC m=+20.172395526,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.491005 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4468920f03 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44790->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.205808899 +0000 UTC m=+20.172455207,LastTimestamp:2026-02-19 00:09:21.205808899 +0000 UTC m=+20.172455207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.497619 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:21 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4468971ad8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 19 00:09:21 crc kubenswrapper[5108]: body: Feb 19 00:09:21 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.206139608 +0000 UTC m=+20.172785926,LastTimestamp:2026-02-19 00:09:21.206139608 +0000 UTC m=+20.172785926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:21 crc kubenswrapper[5108]: > Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.501517 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4468977a50 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:21.206164048 +0000 UTC m=+20.172810356,LastTimestamp:2026-02-19 00:09:21.206164048 +0000 UTC m=+20.172810356,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:21 crc 
kubenswrapper[5108]: I0219 00:09:21.731007 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.895198 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.981699 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.983486 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="50c66ec26c5ddbb0102e6367e48d1ee153a770d6e6c14688f77cd14c8bb05e85" exitCode=255 Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.983577 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"50c66ec26c5ddbb0102e6367e48d1ee153a770d6e6c14688f77cd14c8bb05e85"} Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.983711 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.983816 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.984451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.984480 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc 
kubenswrapper[5108]: I0219 00:09:21.984494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.984525 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.984543 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.984555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.985358 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.985366 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:21 crc kubenswrapper[5108]: I0219 00:09:21.985633 5108 scope.go:117] "RemoveContainer" containerID="50c66ec26c5ddbb0102e6367e48d1ee153a770d6e6c14688f77cd14c8bb05e85" Feb 19 00:09:21 crc kubenswrapper[5108]: E0219 00:09:21.996861 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d409f81fba9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d409f81fba9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.947633065 +0000 UTC m=+3.914279373,LastTimestamp:2026-02-19 00:09:21.986776044 +0000 UTC m=+20.953422352,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:22 crc kubenswrapper[5108]: E0219 00:09:22.213608 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d40b26d1bdd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b26d1bdd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.265032157 +0000 UTC m=+4.231678465,LastTimestamp:2026-02-19 00:09:22.208790766 +0000 UTC m=+21.175437074,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:22 crc kubenswrapper[5108]: E0219 00:09:22.225632 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d40b31ac3a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b31ac3a2 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.276412834 +0000 UTC m=+4.243059152,LastTimestamp:2026-02-19 00:09:22.222108175 +0000 UTC m=+21.188754483,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.729370 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.988891 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.990689 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239"} Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.990892 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.991547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.991594 5108 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[5108]: I0219 00:09:22.991608 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[5108]: E0219 00:09:22.992024 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.731179 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.995375 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.996186 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.998097 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" exitCode=255 Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.998222 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239"} Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.998287 5108 scope.go:117] "RemoveContainer" containerID="50c66ec26c5ddbb0102e6367e48d1ee153a770d6e6c14688f77cd14c8bb05e85" Feb 19 00:09:23 crc kubenswrapper[5108]: 
I0219 00:09:23.998726 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.999359 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.999395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:23 crc kubenswrapper[5108]: I0219 00:09:23.999409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:23 crc kubenswrapper[5108]: E0219 00:09:23.999755 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:24 crc kubenswrapper[5108]: I0219 00:09:24.000050 5108 scope.go:117] "RemoveContainer" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" Feb 19 00:09:24 crc kubenswrapper[5108]: E0219 00:09:24.000268 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:24 crc kubenswrapper[5108]: E0219 00:09:24.006685 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:24 crc kubenswrapper[5108]: E0219 00:09:24.357629 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:24 crc kubenswrapper[5108]: I0219 00:09:24.729415 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:25 crc kubenswrapper[5108]: I0219 00:09:25.003392 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:09:25 crc kubenswrapper[5108]: I0219 00:09:25.729179 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:26 crc kubenswrapper[5108]: I0219 00:09:26.728411 5108 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:27 crc kubenswrapper[5108]: E0219 00:09:27.047456 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.352806 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.354043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.354115 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.354136 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.354175 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:27 crc kubenswrapper[5108]: E0219 00:09:27.363808 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:27 crc kubenswrapper[5108]: I0219 00:09:27.732428 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Feb 19 00:09:28 crc kubenswrapper[5108]: I0219 00:09:28.734529 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.232801 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.233191 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.234432 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.234499 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.234518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:29 crc kubenswrapper[5108]: E0219 00:09:29.235128 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.235551 5108 scope.go:117] "RemoveContainer" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" Feb 19 00:09:29 crc kubenswrapper[5108]: E0219 00:09:29.235923 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:29 crc kubenswrapper[5108]: E0219 00:09:29.245512 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d450f219439\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:29.235859905 +0000 UTC m=+28.202506243,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:29 crc kubenswrapper[5108]: E0219 00:09:29.245908 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:29 crc kubenswrapper[5108]: I0219 00:09:29.730441 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:30 crc kubenswrapper[5108]: I0219 
00:09:30.728883 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:31 crc kubenswrapper[5108]: E0219 00:09:31.365163 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:31 crc kubenswrapper[5108]: I0219 00:09:31.731826 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:31 crc kubenswrapper[5108]: E0219 00:09:31.895561 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:32 crc kubenswrapper[5108]: E0219 00:09:32.552302 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.735257 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:32 crc kubenswrapper[5108]: E0219 00:09:32.792767 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" 
cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.991915 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.992268 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.993369 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.993499 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.993513 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:32 crc kubenswrapper[5108]: E0219 00:09:32.994081 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:32 crc kubenswrapper[5108]: I0219 00:09:32.994430 5108 scope.go:117] "RemoveContainer" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" Feb 19 00:09:32 crc kubenswrapper[5108]: E0219 00:09:32.994744 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:33 crc kubenswrapper[5108]: E0219 00:09:33.001425 5108 event.go:359] 
"Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d450f219439\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:32.994703598 +0000 UTC m=+31.961349906,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:33 crc kubenswrapper[5108]: I0219 00:09:33.732861 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.365013 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.366348 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.366444 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.366464 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.366509 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:34 crc kubenswrapper[5108]: E0219 00:09:34.382022 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:34 crc kubenswrapper[5108]: I0219 00:09:34.732298 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:35 crc kubenswrapper[5108]: I0219 00:09:35.732507 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:36 crc kubenswrapper[5108]: I0219 00:09:36.729625 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:37 crc kubenswrapper[5108]: I0219 00:09:37.731906 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:38 crc kubenswrapper[5108]: E0219 00:09:38.371854 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" 
in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:38 crc kubenswrapper[5108]: I0219 00:09:38.731403 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:39 crc kubenswrapper[5108]: I0219 00:09:39.732322 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:40 crc kubenswrapper[5108]: I0219 00:09:40.732323 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.382709 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.383807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.383861 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.383876 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.383905 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:41 crc kubenswrapper[5108]: E0219 00:09:41.398235 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes 
\"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:41 crc kubenswrapper[5108]: I0219 00:09:41.732672 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:41 crc kubenswrapper[5108]: E0219 00:09:41.896277 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:42 crc kubenswrapper[5108]: I0219 00:09:42.728182 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:43 crc kubenswrapper[5108]: I0219 00:09:43.731733 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:44 crc kubenswrapper[5108]: I0219 00:09:44.733081 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:45 crc kubenswrapper[5108]: E0219 00:09:45.378726 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:45 crc kubenswrapper[5108]: I0219 00:09:45.732739 5108 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:46 crc kubenswrapper[5108]: I0219 00:09:46.733800 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:47 crc kubenswrapper[5108]: E0219 00:09:47.611444 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.733527 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.847782 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.849182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.849266 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.849289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:47 crc kubenswrapper[5108]: E0219 00:09:47.850188 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Feb 19 00:09:47 crc kubenswrapper[5108]: I0219 00:09:47.850679 5108 scope.go:117] "RemoveContainer" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" Feb 19 00:09:47 crc kubenswrapper[5108]: E0219 00:09:47.864744 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d409f81fba9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d409f81fba9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:04.947633065 +0000 UTC m=+3.914279373,LastTimestamp:2026-02-19 00:09:47.852608701 +0000 UTC m=+46.819255049,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.073633 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:09:48 crc kubenswrapper[5108]: E0219 00:09:48.075881 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d40b26d1bdd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b26d1bdd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.265032157 +0000 UTC m=+4.231678465,LastTimestamp:2026-02-19 00:09:48.068412561 +0000 UTC m=+47.035058869,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:48 crc kubenswrapper[5108]: E0219 00:09:48.083843 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d40b31ac3a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d40b31ac3a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:05.276412834 +0000 UTC m=+4.243059152,LastTimestamp:2026-02-19 00:09:48.082242102 +0000 UTC m=+47.048888410,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.399342 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.400379 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.400435 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.400460 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.400500 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:48 crc kubenswrapper[5108]: E0219 00:09:48.417034 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:48 crc kubenswrapper[5108]: I0219 00:09:48.732310 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.082126 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.084478 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41"} Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.084800 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.085644 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.085703 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.085723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:49 crc kubenswrapper[5108]: E0219 00:09:49.086276 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:49 crc kubenswrapper[5108]: I0219 00:09:49.730678 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:50 crc kubenswrapper[5108]: E0219 00:09:50.082644 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.089136 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.089782 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.091894 5108 
generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" exitCode=255 Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.091965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41"} Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.092023 5108 scope.go:117] "RemoveContainer" containerID="860fcf213fb72e1a6465546bf39f48dac944e566f31299fb9f0ceff47365e239" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.092299 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.093355 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.093418 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.093443 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:50 crc kubenswrapper[5108]: E0219 00:09:50.094167 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.094597 5108 scope.go:117] "RemoveContainer" containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" Feb 19 00:09:50 crc kubenswrapper[5108]: E0219 00:09:50.095001 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:50 crc kubenswrapper[5108]: E0219 00:09:50.102631 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d450f219439\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:50.094890142 +0000 UTC m=+49.061536490,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5108]: I0219 00:09:50.732180 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:51 crc kubenswrapper[5108]: I0219 00:09:51.096718 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:09:51 crc kubenswrapper[5108]: 
I0219 00:09:51.732224 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:51 crc kubenswrapper[5108]: E0219 00:09:51.863623 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:51 crc kubenswrapper[5108]: E0219 00:09:51.896854 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:52 crc kubenswrapper[5108]: E0219 00:09:52.387553 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:52 crc kubenswrapper[5108]: I0219 00:09:52.730352 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:53 crc kubenswrapper[5108]: I0219 00:09:53.730714 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:53 crc kubenswrapper[5108]: E0219 00:09:53.894231 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list 
resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:54 crc kubenswrapper[5108]: I0219 00:09:54.731911 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.417397 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.418878 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.418962 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.418984 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.419029 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:55 crc kubenswrapper[5108]: E0219 00:09:55.427912 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:55 crc kubenswrapper[5108]: I0219 00:09:55.731359 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:56.731374 5108 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.732566 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.758083 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.758333 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.759356 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.759401 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:57 crc kubenswrapper[5108]: I0219 00:09:57.759419 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:57 crc kubenswrapper[5108]: E0219 00:09:57.759893 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:58 crc kubenswrapper[5108]: I0219 00:09:58.732068 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.085476 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.085788 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.086813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.086870 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.086889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.087281 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.087661 5108 scope.go:117] "RemoveContainer" containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.087928 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.093083 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d450f219439\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:59.08789307 +0000 UTC m=+58.054539368,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.232458 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.232888 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.234110 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.234173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.234211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.234810 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.235226 5108 scope.go:117] "RemoveContainer" 
containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.235545 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.243220 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d450f219439\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d450f219439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:24.000232505 +0000 UTC m=+22.966878823,LastTimestamp:2026-02-19 00:09:59.235494774 +0000 UTC m=+58.202141112,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:59 crc kubenswrapper[5108]: E0219 00:09:59.394715 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:09:59 crc kubenswrapper[5108]: I0219 00:09:59.731811 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:00 crc kubenswrapper[5108]: I0219 00:10:00.732574 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:01 crc kubenswrapper[5108]: I0219 00:10:01.734212 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:01 crc kubenswrapper[5108]: E0219 00:10:01.897590 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.428599 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.430582 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.430658 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.430696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.430742 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" 
Feb 19 00:10:02 crc kubenswrapper[5108]: E0219 00:10:02.449356 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:10:02 crc kubenswrapper[5108]: I0219 00:10:02.732157 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:03 crc kubenswrapper[5108]: I0219 00:10:03.730467 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:04 crc kubenswrapper[5108]: I0219 00:10:04.730278 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:05 crc kubenswrapper[5108]: I0219 00:10:05.729110 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:06 crc kubenswrapper[5108]: E0219 00:10:06.402190 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:10:06 crc kubenswrapper[5108]: I0219 00:10:06.730928 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:07 crc kubenswrapper[5108]: I0219 00:10:07.381980 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wnkqz" Feb 19 00:10:07 crc kubenswrapper[5108]: I0219 00:10:07.391760 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wnkqz" Feb 19 00:10:07 crc kubenswrapper[5108]: I0219 00:10:07.499374 5108 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 00:10:07 crc kubenswrapper[5108]: I0219 00:10:07.537762 5108 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 00:10:08 crc kubenswrapper[5108]: I0219 00:10:08.392809 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-21 00:05:07 +0000 UTC" deadline="2026-03-12 13:15:59.098016539 +0000 UTC" Feb 19 00:10:08 crc kubenswrapper[5108]: I0219 00:10:08.393751 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="517h5m50.704277267s" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.449956 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.450907 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.450964 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.450977 5108 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.451086 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.462481 5108 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.462808 5108 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.462835 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.467555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.467634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.467647 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.467671 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.467691 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:09Z","lastTransitionTime":"2026-02-19T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.479704 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.487989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.488043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.488054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.488073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.488083 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:09Z","lastTransitionTime":"2026-02-19T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.502312 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.512198 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.512259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.512279 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.512301 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.512319 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:09Z","lastTransitionTime":"2026-02-19T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.521817 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.528590 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.528621 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.528632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.528645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:09 crc kubenswrapper[5108]: I0219 00:10:09.528656 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:09Z","lastTransitionTime":"2026-02-19T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.536232 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.536409 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.536440 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.637169 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.738285 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.839025 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:09 crc kubenswrapper[5108]: E0219 00:10:09.939619 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.040668 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.141384 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.242382 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.342783 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.443061 5108 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.544235 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.644694 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.745152 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.846028 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:10 crc kubenswrapper[5108]: E0219 00:10:10.947038 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.047481 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.148282 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.248661 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.348981 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.449361 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.550301 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc 
kubenswrapper[5108]: E0219 00:10:11.650429 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.750682 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.850931 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.898352 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:10:11 crc kubenswrapper[5108]: E0219 00:10:11.951505 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.052617 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.152711 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.253774 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.354401 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.455389 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.555824 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.656388 5108 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.757002 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: I0219 00:10:12.847525 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:12 crc kubenswrapper[5108]: I0219 00:10:12.848397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:12 crc kubenswrapper[5108]: I0219 00:10:12.848460 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:12 crc kubenswrapper[5108]: I0219 00:10:12.848472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.848900 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:12 crc kubenswrapper[5108]: I0219 00:10:12.849151 5108 scope.go:117] "RemoveContainer" containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.857969 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:12 crc kubenswrapper[5108]: E0219 00:10:12.958409 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.059277 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.157858 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.159266 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210"} Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.159458 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.160060 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.160330 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.160373 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:13 crc kubenswrapper[5108]: I0219 00:10:13.160387 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.160998 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.260976 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.361980 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.462478 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.563525 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.664514 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.765598 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.865988 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:13 crc kubenswrapper[5108]: E0219 00:10:13.966758 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.067651 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.168449 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.269028 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.370205 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.471534 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.571658 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.672319 5108 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.773440 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.874068 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:14 crc kubenswrapper[5108]: E0219 00:10:14.974641 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.074751 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.166216 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.167164 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.169128 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" exitCode=255 Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.169189 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210"} Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.169232 5108 scope.go:117] "RemoveContainer" containerID="03a51766f10b40427c83e726e5b8cffb7f36210d29a958e4c2445cbb435b5d41" Feb 19 00:10:15 crc 
kubenswrapper[5108]: I0219 00:10:15.169499 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.170868 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.171016 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.171650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.172270 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:15 crc kubenswrapper[5108]: I0219 00:10:15.172623 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.172964 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.175087 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.275488 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.375900 5108 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.476958 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.577502 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.678555 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.779524 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.880570 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:15 crc kubenswrapper[5108]: E0219 00:10:15.981890 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.084470 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: I0219 00:10:16.174638 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.185890 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.286401 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.387196 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc 
kubenswrapper[5108]: E0219 00:10:16.487815 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.588436 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.689518 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.790681 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.891383 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:16 crc kubenswrapper[5108]: E0219 00:10:16.992553 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.092931 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.193636 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.294742 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.395815 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.497003 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.598133 5108 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.699399 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.800019 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:17 crc kubenswrapper[5108]: E0219 00:10:17.900193 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.001102 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.101851 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.202699 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.303416 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.404642 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.505284 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.605711 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.706891 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.807176 5108 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:18 crc kubenswrapper[5108]: E0219 00:10:18.907731 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.008080 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.108513 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.209788 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.232379 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.232819 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.235887 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.235976 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.235995 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.236776 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.237230 5108 scope.go:117] "RemoveContainer" 
containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.237664 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.310671 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.411146 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.512191 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.612641 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.713062 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.813577 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.840931 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.846672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.846744 
5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.846759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.846782 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.846799 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:19Z","lastTransitionTime":"2026-02-19T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.860674 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.872249 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.872316 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.872335 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.872358 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.872374 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:19Z","lastTransitionTime":"2026-02-19T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.885976 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.895284 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.895337 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.895353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.895375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.895390 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:19Z","lastTransitionTime":"2026-02-19T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.910267 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt; elided] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.923022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.923090 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.923113 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.923139 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:19 crc kubenswrapper[5108]: I0219 00:10:19.923159 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:19Z","lastTransitionTime":"2026-02-19T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.937283 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt; elided] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.937412 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 00:10:19 crc kubenswrapper[5108]: E0219 00:10:19.937445 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.037919 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.138674 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.239601 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.340465 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.441514 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.542531 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.642918 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.743281 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.843927 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:20 crc kubenswrapper[5108]: I0219 00:10:20.847419 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:20 crc kubenswrapper[5108]: I0219 00:10:20.849000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:20 crc kubenswrapper[5108]: I0219 00:10:20.849088 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:20 crc kubenswrapper[5108]: I0219 00:10:20.849108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.849640 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:20 crc kubenswrapper[5108]: E0219 00:10:20.945040 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.046039 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.146737 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.247318 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.347816 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.448765 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.549622 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.649766 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.750673 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.851316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.899607 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:10:21 crc kubenswrapper[5108]: E0219 00:10:21.951908 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: I0219 00:10:22.034572 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.052447 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.153256 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.253895 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.355037 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.455467 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.556646 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.656995 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.758075 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.858898 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:22 crc kubenswrapper[5108]: E0219 00:10:22.960127 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.061023 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.160400 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.160752 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.161294 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.161915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.161981 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.161994 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.162526 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:23 crc kubenswrapper[5108]: I0219 00:10:23.162824 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.163108 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.261444 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.362039 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.462461 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.563646 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.663980 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.765115 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.865887 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:23 crc kubenswrapper[5108]: E0219 00:10:23.966990 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.067789 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.168238 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.269292 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.370162 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.471174 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.571807 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.673025 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.773645 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.874573 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:24 crc kubenswrapper[5108]: E0219 00:10:24.975267 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.076338 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.177379 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.278460 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.379660 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.480204 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.580766 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.680916 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.781885 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.882437 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:25 crc kubenswrapper[5108]: E0219 00:10:25.983100 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.084307 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.185406 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.285614 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.386697 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.487421 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.588171 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.688331 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.789365 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.889802 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:26 crc kubenswrapper[5108]: E0219 00:10:26.990914 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.091048 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.191239 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.291927 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.392371 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.493322 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.593715 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.694401 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.795024 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: I0219 00:10:27.847572 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:27 crc kubenswrapper[5108]: I0219 00:10:27.848973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:27 crc kubenswrapper[5108]: I0219 00:10:27.849035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:27 crc kubenswrapper[5108]: I0219 00:10:27.849048 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.849637 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.895407 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:27 crc kubenswrapper[5108]: E0219 00:10:27.995868 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.096601 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.197547 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.298005 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.398797 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.500042 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.601436 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.701856 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.803027 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:28 crc kubenswrapper[5108]: E0219 00:10:28.904191 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.005042 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.105984 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.207068 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.307214 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.408252 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.509319 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.610478 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.710836 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.811974 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:29 crc kubenswrapper[5108]: E0219 00:10:29.912322 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.013167 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.113627 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.214711 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.254994 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.263540 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.263626 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.263648 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.263674 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.263698 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:30Z","lastTransitionTime":"2026-02-19T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.280680 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.285037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.285108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.285129 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.285155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.285174 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:30Z","lastTransitionTime":"2026-02-19T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.300781 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.309563 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.309638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.309658 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.309685 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.309708 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:30Z","lastTransitionTime":"2026-02-19T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.323754 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.329144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.329211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.329232 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.329258 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:30 crc kubenswrapper[5108]: I0219 00:10:30.329280 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:30Z","lastTransitionTime":"2026-02-19T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.345274 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.345580 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.345629 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.446324 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.546727 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.647129 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.747599 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.848705 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:30 crc kubenswrapper[5108]: E0219 00:10:30.949605 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.050080 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.151162 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.251395 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.351670 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.451992 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.552794 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.653659 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.753753 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.854372 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.900716 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:10:31 crc kubenswrapper[5108]: E0219 00:10:31.954985 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.055696 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.156378 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.256784 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.357096 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.457308 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.557638 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.658832 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.759500 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.860008 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:32 crc kubenswrapper[5108]: E0219 00:10:32.960558 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.061547 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.162337 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.262475 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.362781 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.464051 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.564666 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.665154 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.765997 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: I0219 00:10:33.847374 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:33 crc kubenswrapper[5108]: I0219 00:10:33.848559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:33 crc kubenswrapper[5108]: I0219 00:10:33.848615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:33 crc kubenswrapper[5108]: I0219 00:10:33.848641 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.849458 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.866100 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:33 crc kubenswrapper[5108]: E0219 00:10:33.967203 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.068426 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.169053 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.269409 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.369866 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.471097 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.572125 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.673103 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.773301 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: I0219 00:10:34.847662 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:34 crc kubenswrapper[5108]: I0219 00:10:34.849186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:34 crc kubenswrapper[5108]: I0219 00:10:34.849253 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:34 crc kubenswrapper[5108]: I0219 00:10:34.849276 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.850251 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:34 crc kubenswrapper[5108]: I0219 00:10:34.850613 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.850994 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.873422 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:34 crc kubenswrapper[5108]: E0219 00:10:34.974316 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.074748 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.175161 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.275458 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.376078 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.476507 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.576812 5108 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.678113 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.746109 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.750543 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.763637 5108 apiserver.go:52] "Watching apiserver"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.766451 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.768062 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.768473 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gxmww","openshift-multus/multus-v42mj","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-machine-config-operator/machine-config-daemon-k5zp6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/network-metrics-daemon-2clv5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-image-registry/node-ca-vsm7k","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-vk6d6","openshift-dns/node-resolver-kb56v"]
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.770330 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.771307 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.771509 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.772353 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.773497 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.773554 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.773915 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.774096 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.774201 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.774545 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.774889 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.774974 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.775027 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.775232 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.777256 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.779692 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.779818 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.779877 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783205 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783205 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783291 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783310 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783347 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.783363 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:35Z","lastTransitionTime":"2026-02-19T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.788584 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.788899 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.789324 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.792627 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.792662 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.794824 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.795578 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5"
Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.795769 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.804669 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.810558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gxmww"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.815479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.815706 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.815731 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.815801 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.816053 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.816410 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.820177 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.820432 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vsm7k"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.820679 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.823337 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.825530 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.825692 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.826376 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.826751 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kb56v"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.827911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.827955 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.828510 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.828954 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.830004 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.831391 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.831597 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.831717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.832370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.833477 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.835504 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.835669 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.835828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.836282 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.836456 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.836565 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.837787 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.837891 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.851959 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.853344 5108 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.869195 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.887568 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.888356 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.888700 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.889542 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.889656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.889759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.889855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890178 5108 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:35Z","lastTransitionTime":"2026-02-19T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890304 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890486 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890561 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890635 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890704 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890731 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.890836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.891103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.891727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.891871 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.892082 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.891767 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.893061 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.892397 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.892741 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.892786 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.892896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.893853 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.893574 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.893737 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894306 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894420 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894544 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894798 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.894373 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895032 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895072 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895356 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895444 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895623 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895657 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895756 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895794 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895867 5108 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895903 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.895969 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896013 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896119 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896229 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896259 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896309 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896523 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: 
"2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896569 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896581 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896643 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896733 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.896884 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897150 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897240 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897296 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897402 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897506 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897549 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897577 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897624 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897650 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897754 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.897828 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898057 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898174 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898540 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898420 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898674 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.899137 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.899321 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.898039 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.899988 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900083 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900169 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900492 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900845 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.900789 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901039 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901398 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901639 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901861 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901865 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901896 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.901896 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.902125 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.902288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.902279 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903102 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903211 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903285 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903363 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903554 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903794 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903848 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.903873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904137 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904211 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904267 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904344 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904372 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904406 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904500 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: 
I0219 00:10:35.904557 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904635 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904675 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904772 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904805 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904887 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904990 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.905087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.904949 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910777 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910807 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.905073 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.905475 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.909503 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.909521 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.909696 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.909794 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910014 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.909975 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910139 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910591 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910719 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.910695 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911057 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911118 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911144 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911156 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911177 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911233 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911237 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911270 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911522 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911526 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911713 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911744 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911772 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911793 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911816 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911844 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911868 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911890 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911915 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911909 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911993 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912018 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912046 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912073 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912076 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912089 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912096 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912261 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912463 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912530 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912559 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912638 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912689 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912713 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912742 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912770 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912821 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912838 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912855 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912877 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912900 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912920 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912973 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912992 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913010 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913029 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913047 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913090 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913108 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913130 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913149 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913170 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913195 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913213 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913233 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913334 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913373 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913393 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913417 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913442 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913463 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913484 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913502 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913520 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913561 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913597 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913617 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913637 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913661 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName:
\"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913722 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913752 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913777 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913803 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 
00:10:35.913825 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913846 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913888 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913921 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913965 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod 
\"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913983 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914005 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914027 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914117 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914136 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914155 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914193 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 
00:10:35.914237 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914203 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914285 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914305 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 
00:10:35.914327 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914749 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914826 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914881 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914972 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915016 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915142 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915183 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915227 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:35 crc 
kubenswrapper[5108]: I0219 00:10:35.915272 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915315 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915404 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915447 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915487 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915531 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915632 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915722 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915766 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916511 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916562 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916607 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916653 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916697 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916835 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod 
\"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916931 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917014 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917104 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917971 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918049 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918095 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918183 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918270 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918327 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918422 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.918473 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912523 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.912881 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.911250 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913097 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913158 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913836 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913977 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.913998 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914283 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.914814 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915244 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915513 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915661 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915681 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.915685 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916059 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916116 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916352 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.916590 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917044 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917406 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917475 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917525 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.917680 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.928563 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929120 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929198 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929336 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929428 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929486 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929658 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5da66974-30d5-4571-b5df-d264febc8a9b-tmp-dir\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929717 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cnibin\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929757 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-os-release\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929797 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-bin\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929878 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc 
kubenswrapper[5108]: I0219 00:10:35.929990 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-socket-dir-parent\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930064 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930168 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930230 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/995cb3be-1541-4090-83fe-8bf1a8259f0d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97553d38-332c-4cc9-8732-5363a62dde8c-serviceca\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930353 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-system-cni-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930410 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930495 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930547 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dh5\" (UniqueName: 
\"kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930609 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930652 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cni-binary-copy\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930690 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930731 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " 
pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930783 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930840 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930990 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/995cb3be-1541-4090-83fe-8bf1a8259f0d-rootfs\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931051 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: 
\"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931096 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-os-release\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931136 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931179 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdz5q\" (UniqueName: \"kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931225 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lkp\" (UniqueName: \"kubernetes.io/projected/995cb3be-1541-4090-83fe-8bf1a8259f0d-kube-api-access-z2lkp\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931264 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/97553d38-332c-4cc9-8732-5363a62dde8c-host\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931315 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-hostroot\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931365 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-conf-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931428 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931477 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931598 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-netns\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931671 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931731 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931834 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931877 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931981 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932043 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-system-cni-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932102 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:35 crc 
kubenswrapper[5108]: I0219 00:10:35.932159 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932204 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932288 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-cni-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932331 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-daemon-config\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-multus-certs\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932419 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932485 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cnibin\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932536 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-binary-copy\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932583 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf757\" (UniqueName: \"kubernetes.io/projected/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-kube-api-access-sf757\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932631 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/995cb3be-1541-4090-83fe-8bf1a8259f0d-proxy-tls\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932686 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-etc-kubernetes\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932755 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932856 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933036 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933112 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-k8s-cni-cncf-io\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933242 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbf9w\" (UniqueName: \"kubernetes.io/projected/766a3580-a7a9-49f7-8948-2d949558d2d2-kube-api-access-gbf9w\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933305 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-multus\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-kubelet\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933414 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbs9\" (UniqueName: \"kubernetes.io/projected/c8ba935e-bb01-466a-8b94-8b0c15e535b1-kube-api-access-nsbs9\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933529 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933580 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klkhn\" (UniqueName: \"kubernetes.io/projected/5da66974-30d5-4571-b5df-d264febc8a9b-kube-api-access-klkhn\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933637 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgpw\" (UniqueName: \"kubernetes.io/projected/97553d38-332c-4cc9-8732-5363a62dde8c-kube-api-access-lcgpw\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933730 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933835 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933894 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5da66974-30d5-4571-b5df-d264febc8a9b-hosts-file\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934019 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934091 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934277 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934332 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934368 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934400 5108 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934435 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934469 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934493 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934519 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934544 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934569 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934593 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934616 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934642 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934670 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934697 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934723 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934748 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934773 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934798 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934823 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934848 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934873 5108 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934899 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934925 5108 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934985 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935011 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935037 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935504 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935530 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935556 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935583 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935607 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935630 5108 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935654 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935678 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935702 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935727 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935751 5108 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935775 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935800 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935825 5108 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935850 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935873 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935898 5108 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935923 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935974 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935998 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936024 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936048 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936073 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936097 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936123 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936148 5108 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936173 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936199 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936225 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936250 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936273 5108 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936300 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936326 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936351 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936376 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936402 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936428 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936456 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936482 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936507 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936533 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936558 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936583 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936609 5108 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936636 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936661 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936686 5108 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936708 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936729 5108 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936753 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936778 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936803 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936828 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936853 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936878 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936906 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936931 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936981 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937005 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937027 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937051 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937075 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937107 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937132 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937153 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937179 5108 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937204 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937229 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937252 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937277 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937301 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929346 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929642 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.929648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930025 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930690 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.930847 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931039 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931111 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931267 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931529 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.937785 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:36.437677051 +0000 UTC m=+95.404323419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937998 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931574 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.931641 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932134 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932361 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932514 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.932719 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.933109 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934621 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.934872 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935328 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.935502 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936206 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936441 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936596 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.936956 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937055 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.937275 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.937402 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.938521 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:36.438499044 +0000 UTC m=+95.405145362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.939220 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.939472 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.940456 5108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.940743 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.940798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.941042 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.941236 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:36.441172265 +0000 UTC m=+95.407818573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.942017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.942465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.944067 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.944081 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.944792 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.946353 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.946382 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.946473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.946633 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.951391 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.951688 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.952400 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.955856 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.956248 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.956648 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.957141 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.957740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.957778 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.957951 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.958326 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.958417 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959521 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959763 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959825 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959965 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.960079 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.960192 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.960298 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.959729 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.960358 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.960485 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.961503 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.962131 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.962227 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.962339 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.962469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.961199 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.964422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.964498 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.965115 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.965148 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965142 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.965168 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965191 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.965271 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:36.465236918 +0000 UTC m=+95.431883236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965471 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965735 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965771 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.965996 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.966132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.966400 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.972307 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.972351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.972454 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.973047 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.973539 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.973696 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974323 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974355 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974556 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974570 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974755 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975632 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974775 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975494 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.976387 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.976431 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.977053 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.978960 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.979422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975815 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975232 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975266 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975324 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975517 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975543 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.975608 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.974753 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.975810 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.980222 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.980249 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.980343 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:36.480312171 +0000 UTC m=+95.446958519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.980659 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.980795 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.980909 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.981069 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.981268 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.982816 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.983816 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.984664 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.984769 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.984816 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.984982 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.985035 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.985108 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.985271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.985484 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.986084 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.986345 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:10:35 crc kubenswrapper[5108]: E0219 00:10:35.986565 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.988729 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.989020 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.991133 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.991956 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.991988 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.992671 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993090 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993143 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993209 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:35Z","lastTransitionTime":"2026-02-19T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:35 crc kubenswrapper[5108]: I0219 00:10:35.993429 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.002128 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.005878 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.013985 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.022086 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.025766 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.034885 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.036956 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.037810 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5da66974-30d5-4571-b5df-d264febc8a9b-tmp-dir\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.037848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cnibin\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.037873 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-os-release\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038146 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cnibin\") pod \"multus-v42mj\" (UID: 
\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038274 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5da66974-30d5-4571-b5df-d264febc8a9b-tmp-dir\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038333 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-os-release\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038338 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-bin\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038370 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-bin\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038387 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038412 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-socket-dir-parent\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038434 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038454 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038544 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/995cb3be-1541-4090-83fe-8bf1a8259f0d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97553d38-332c-4cc9-8732-5363a62dde8c-serviceca\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-system-cni-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038648 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dh5\" (UniqueName: \"kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038678 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cni-binary-copy\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038703 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038725 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038717 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-socket-dir-parent\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038747 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038763 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038769 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038793 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/995cb3be-1541-4090-83fe-8bf1a8259f0d-rootfs\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038814 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038817 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-os-release\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038844 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdz5q\" (UniqueName: \"kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038910 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lkp\" (UniqueName: \"kubernetes.io/projected/995cb3be-1541-4090-83fe-8bf1a8259f0d-kube-api-access-z2lkp\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97553d38-332c-4cc9-8732-5363a62dde8c-host\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038982 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-hostroot\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-conf-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039026 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039046 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039067 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039087 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-netns\") pod 
\"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039108 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039130 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-system-cni-dir\") pod 
\"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039239 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039262 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039282 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-cni-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039309 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-daemon-config\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " 
pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039330 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-multus-certs\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039351 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039372 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cnibin\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039405 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-binary-copy\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039425 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf757\" (UniqueName: \"kubernetes.io/projected/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-kube-api-access-sf757\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " 
pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039444 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/995cb3be-1541-4090-83fe-8bf1a8259f0d-proxy-tls\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039463 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-etc-kubernetes\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039508 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039532 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039554 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-k8s-cni-cncf-io\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf9w\" (UniqueName: \"kubernetes.io/projected/766a3580-a7a9-49f7-8948-2d949558d2d2-kube-api-access-gbf9w\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-multus\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039614 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-kubelet\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsbs9\" (UniqueName: \"kubernetes.io/projected/c8ba935e-bb01-466a-8b94-8b0c15e535b1-kube-api-access-nsbs9\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc 
kubenswrapper[5108]: I0219 00:10:36.039657 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039679 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039684 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-system-cni-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039700 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klkhn\" (UniqueName: \"kubernetes.io/projected/5da66974-30d5-4571-b5df-d264febc8a9b-kube-api-access-klkhn\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039725 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgpw\" (UniqueName: \"kubernetes.io/projected/97553d38-332c-4cc9-8732-5363a62dde8c-kube-api-access-lcgpw\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039750 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039782 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5da66974-30d5-4571-b5df-d264febc8a9b-hosts-file\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039802 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039873 5108 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039887 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039902 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039917 5108 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039930 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039961 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039973 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039985 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039997 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040010 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040023 5108 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040035 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040048 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040060 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040096 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040109 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040121 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040133 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on 
node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040145 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040158 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040170 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040181 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040194 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040205 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040218 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 
00:10:36.040231 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040243 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040255 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040269 5108 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040281 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040294 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040306 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040318 5108 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040331 5108 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040345 5108 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040358 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040371 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040383 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040395 5108 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040407 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on 
node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040420 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040432 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040444 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040455 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040467 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040481 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040492 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 
00:10:36.040505 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040517 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040529 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040541 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040553 5108 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040566 5108 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040578 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040589 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040600 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040612 5108 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040623 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040637 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040649 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040662 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040674 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040685 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040697 5108 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040708 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040719 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040732 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040744 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040755 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040768 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040780 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040791 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040802 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040813 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040823 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040835 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040847 5108 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040859 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040871 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040872 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040883 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040894 5108 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040907 5108 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040919 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040930 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040958 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040971 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040983 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040994 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041005 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041016 5108 
reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041027 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041038 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041049 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041060 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041072 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041083 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041093 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041105 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041116 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041126 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041137 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041149 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041160 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041172 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc 
kubenswrapper[5108]: I0219 00:10:36.041183 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041198 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041209 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041221 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041232 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041244 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041255 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041266 5108 
reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041288 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041299 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041310 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041321 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041333 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041345 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041356 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041367 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041379 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041390 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041401 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041412 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.040406 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-cni-binary-copy\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041561 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-system-cni-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.041644 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041412 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.041699 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:36.541681072 +0000 UTC m=+95.508327400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041781 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/995cb3be-1541-4090-83fe-8bf1a8259f0d-rootfs\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.038792 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.041978 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-os-release\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.042023 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.042334 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-k8s-cni-cncf-io\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.042722 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/995cb3be-1541-4090-83fe-8bf1a8259f0d-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043068 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043238 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-cni-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043313 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97553d38-332c-4cc9-8732-5363a62dde8c-host\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043389 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-hostroot\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043446 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-conf-dir\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.043485 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.044057 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c8ba935e-bb01-466a-8b94-8b0c15e535b1-multus-daemon-config\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc 
kubenswrapper[5108]: I0219 00:10:36.044139 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-multus-certs\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.044212 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.044246 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cnibin\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.044578 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.044665 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045024 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-cni-binary-copy\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045035 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045106 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-run-netns\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045223 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-cni-multus\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045251 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-host-var-lib-kubelet\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045244 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045352 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ba935e-bb01-466a-8b94-8b0c15e535b1-etc-kubernetes\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.039657 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97553d38-332c-4cc9-8732-5363a62dde8c-serviceca\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045431 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.047423 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.047920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.048108 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5da66974-30d5-4571-b5df-d264febc8a9b-hosts-file\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.048152 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.048487 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.045360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.048788 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.048842 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.058708 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2dh5\" (UniqueName: \"kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.061489 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/995cb3be-1541-4090-83fe-8bf1a8259f0d-proxy-tls\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.062968 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.063751 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bbrq4\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.068983 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klkhn\" (UniqueName: 
\"kubernetes.io/projected/5da66974-30d5-4571-b5df-d264febc8a9b-kube-api-access-klkhn\") pod \"node-resolver-kb56v\" (UID: \"5da66974-30d5-4571-b5df-d264febc8a9b\") " pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.069402 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf9w\" (UniqueName: \"kubernetes.io/projected/766a3580-a7a9-49f7-8948-2d949558d2d2-kube-api-access-gbf9w\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.069825 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdz5q\" (UniqueName: \"kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q\") pod \"ovnkube-node-vk6d6\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.070314 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsbs9\" (UniqueName: \"kubernetes.io/projected/c8ba935e-bb01-466a-8b94-8b0c15e535b1-kube-api-access-nsbs9\") pod \"multus-v42mj\" (UID: \"c8ba935e-bb01-466a-8b94-8b0c15e535b1\") " pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.070402 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgpw\" (UniqueName: \"kubernetes.io/projected/97553d38-332c-4cc9-8732-5363a62dde8c-kube-api-access-lcgpw\") pod \"node-ca-vsm7k\" (UID: \"97553d38-332c-4cc9-8732-5363a62dde8c\") " pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.070889 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf757\" (UniqueName: 
\"kubernetes.io/projected/ffe88610-b8e8-4a54-9e50-62ebbfd5d6db-kube-api-access-sf757\") pod \"multus-additional-cni-plugins-gxmww\" (UID: \"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\") " pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.073434 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.076637 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lkp\" (UniqueName: \"kubernetes.io/projected/995cb3be-1541-4090-83fe-8bf1a8259f0d-kube-api-access-z2lkp\") pod \"machine-config-daemon-k5zp6\" (UID: \"995cb3be-1541-4090-83fe-8bf1a8259f0d\") " pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.081708 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.084065 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.094084 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.095014 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.095057 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.095072 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.095088 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.095101 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.102227 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.111059 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"sup
plementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":
true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.117263 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.122885 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 19 00:10:36 crc kubenswrapper[5108]: else Feb 19 00:10:36 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 19 00:10:36 crc kubenswrapper[5108]: exit 1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 19 00:10:36 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.123979 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.124909 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.134125 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.136314 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-c3b32536ddb0b0474c656806fd0bef3a695c29b55dc9f16418809f76b9d9562c WatchSource:0}: Error finding container c3b32536ddb0b0474c656806fd0bef3a695c29b55dc9f16418809f76b9d9562c: Status 404 returned error can't find the container with id c3b32536ddb0b0474c656806fd0bef3a695c29b55dc9f16418809f76b9d9562c Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.140429 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.140579 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 
19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: source "/env/_master" Feb 19 00:10:36 crc kubenswrapper[5108]: set +o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 19 00:10:36 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 19 00:10:36 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 19 00:10:36 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 19 00:10:36 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 19 00:10:36 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${ho_enable} \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-interconnect \ Feb 19 00:10:36 crc kubenswrapper[5108]: --disable-approver \ Feb 19 00:10:36 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ 
Feb 19 00:10:36 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:
nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.144004 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.145155 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: source "/env/_master" Feb 19 00:10:36 crc kubenswrapper[5108]: set +o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --disable-webhook \ Feb 19 00:10:36 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.145431 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-ca1e7a54e78a4c168d5ac530eb081eb20ed267de5741fb818c05e06088dcc900 WatchSource:0}: Error finding container ca1e7a54e78a4c168d5ac530eb081eb20ed267de5741fb818c05e06088dcc900: Status 404 returned error can't find the container with id 
ca1e7a54e78a4c168d5ac530eb081eb20ed267de5741fb818c05e06088dcc900 Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.147137 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.148461 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.150025 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.153868 5108 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod995cb3be_1541_4090_83fe_8bf1a8259f0d.slice/crio-792f699fc83f9114a3f03cf9925f4e413188dd4d4b012ae2ccff261b26f826ad WatchSource:0}: Error finding container 792f699fc83f9114a3f03cf9925f4e413188dd4d4b012ae2ccff261b26f826ad: Status 404 returned error can't find the container with id 792f699fc83f9114a3f03cf9925f4e413188dd4d4b012ae2ccff261b26f826ad Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.154427 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.157537 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gxmww" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.164385 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2lkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.167639 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2lkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.169503 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" 
podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.170151 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vsm7k" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.171764 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffe88610_b8e8_4a54_9e50_62ebbfd5d6db.slice/crio-fca5d3f3036ac6c156ce2d28dd16ec27b6634e7c71c1fd4e684c657cc5edc6c8 WatchSource:0}: Error finding container fca5d3f3036ac6c156ce2d28dd16ec27b6634e7c71c1fd4e684c657cc5edc6c8: Status 404 returned error can't find the container with id fca5d3f3036ac6c156ce2d28dd16ec27b6634e7c71c1fd4e684c657cc5edc6c8 Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.173365 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352
bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"conta
inerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.176147 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sf757,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gxmww_openshift-multus(ffe88610-b8e8-4a54-9e50-62ebbfd5d6db): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.177241 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gxmww" podUID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.179164 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v42mj" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.180272 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97553d38_332c_4cc9_8732_5363a62dde8c.slice/crio-8b184fac35141142ba8d628b6fe45dad1c7775e997b6e8bd59b72fadd35ec007 WatchSource:0}: Error finding container 8b184fac35141142ba8d628b6fe45dad1c7775e997b6e8bd59b72fadd35ec007: Status 404 returned error can't find the container with id 8b184fac35141142ba8d628b6fe45dad1c7775e997b6e8bd59b72fadd35ec007 Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.186486 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 19 00:10:36 crc kubenswrapper[5108]: while [ true ]; Feb 19 00:10:36 crc kubenswrapper[5108]: do Feb 19 00:10:36 crc kubenswrapper[5108]: for f in $(ls /tmp/serviceca); do Feb 19 00:10:36 crc kubenswrapper[5108]: echo $f Feb 19 00:10:36 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 19 00:10:36 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 19 00:10:36 crc 
kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 19 00:10:36 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then Feb 19 00:10:36 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:36 crc kubenswrapper[5108]: else Feb 19 00:10:36 crc kubenswrapper[5108]: mkdir $reg_dir_path Feb 19 00:10:36 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do Feb 19 00:10:36 crc kubenswrapper[5108]: echo $d Feb 19 00:10:36 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 19 00:10:36 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}" Feb 19 00:10:36 crc kubenswrapper[5108]: if [ ! -e "${reg_conf_path}" ]; then Feb 19 00:10:36 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait ${!} Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcgpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsm7k_openshift-image-registry(97553d38-332c-4cc9-8732-5363a62dde8c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.187794 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsm7k" podUID="97553d38-332c-4cc9-8732-5363a62dde8c" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.192254 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.192579 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ba935e_bb01_466a_8b94_8b0c15e535b1.slice/crio-4fff08eff2406da2c14cbcc8b5a5d5b4f840f1d31c0ec2beb9c3dca4e17b30f1 WatchSource:0}: Error finding container 4fff08eff2406da2c14cbcc8b5a5d5b4f840f1d31c0ec2beb9c3dca4e17b30f1: Status 404 returned error can't find the container with id 4fff08eff2406da2c14cbcc8b5a5d5b4f840f1d31c0ec2beb9c3dca4e17b30f1 Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.194549 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.194878 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 19 00:10:36 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 19 00:10:36 crc kubenswrapper[5108]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-v42mj_openshift-multus(c8ba935e-bb01-466a-8b94-8b0c15e535b1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.197238 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-v42mj" podUID="c8ba935e-bb01-466a-8b94-8b0c15e535b1" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.198489 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.198597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.198679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.198770 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.198792 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.205983 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.209019 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 19 00:10:36 crc kubenswrapper[5108]: apiVersion: v1 Feb 19 00:10:36 crc kubenswrapper[5108]: clusters: Feb 19 00:10:36 crc kubenswrapper[5108]: - cluster: Feb 19 00:10:36 crc kubenswrapper[5108]: certificate-authority: 
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 19 00:10:36 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443 Feb 19 00:10:36 crc kubenswrapper[5108]: name: default-cluster Feb 19 00:10:36 crc kubenswrapper[5108]: contexts: Feb 19 00:10:36 crc kubenswrapper[5108]: - context: Feb 19 00:10:36 crc kubenswrapper[5108]: cluster: default-cluster Feb 19 00:10:36 crc kubenswrapper[5108]: namespace: default Feb 19 00:10:36 crc kubenswrapper[5108]: user: default-auth Feb 19 00:10:36 crc kubenswrapper[5108]: name: default-context Feb 19 00:10:36 crc kubenswrapper[5108]: current-context: default-context Feb 19 00:10:36 crc kubenswrapper[5108]: kind: Config Feb 19 00:10:36 crc kubenswrapper[5108]: preferences: {} Feb 19 00:10:36 crc kubenswrapper[5108]: users: Feb 19 00:10:36 crc kubenswrapper[5108]: - name: default-auth Feb 19 00:10:36 crc kubenswrapper[5108]: user: Feb 19 00:10:36 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:36 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:36 crc kubenswrapper[5108]: EOF Feb 19 00:10:36 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdz5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-vk6d6_openshift-ovn-kubernetes(7f4459ce-0bd5-493a-813f-977d6e26f440): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.210916 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.213414 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kb56v" Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.223563 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da66974_30d5_4571_b5df_d264febc8a9b.slice/crio-68f31f6cfa815246242bb612412956bd18c5337ba250d167af86b740869a57e7 WatchSource:0}: Error finding container 68f31f6cfa815246242bb612412956bd18c5337ba250d167af86b740869a57e7: Status 404 returned error can't find the container with id 68f31f6cfa815246242bb612412956bd18c5337ba250d167af86b740869a57e7 Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.225480 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:36 crc kubenswrapper[5108]: set -uo pipefail Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 19 00:10:36 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 19 00:10:36 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 19 00:10:36 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 19 00:10:36 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." 
Feb 19 00:10:36 crc kubenswrapper[5108]: exit 1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: while true; do Feb 19 00:10:36 crc kubenswrapper[5108]: declare -A svc_ips Feb 19 00:10:36 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 19 00:10:36 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 19 00:10:36 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 19 00:10:36 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 19 00:10:36 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 19 00:10:36 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 19 00:10:36 crc kubenswrapper[5108]: for i in ${!cmds[*]} Feb 19 00:10:36 crc kubenswrapper[5108]: do Feb 19 00:10:36 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}")) Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}" Feb 19 00:10:36 crc kubenswrapper[5108]: break Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs Feb 19 00:10:36 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 19 00:10:36 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 19 00:10:36 crc kubenswrapper[5108]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 19 00:10:36 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: continue Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Append resolver entries for services Feb 19 00:10:36 crc kubenswrapper[5108]: rc=0 Feb 19 00:10:36 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 19 00:10:36 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 19 00:10:36 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: continue Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 19 00:10:36 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 19 00:10:36 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 19 00:10:36 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: unset svc_ips Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klkhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-kb56v_openshift-dns(5da66974-30d5-4571-b5df-d264febc8a9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.225598 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.227390 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-kb56v" podUID="5da66974-30d5-4571-b5df-d264febc8a9b" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.237858 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsm7k" event={"ID":"97553d38-332c-4cc9-8732-5363a62dde8c","Type":"ContainerStarted","Data":"8b184fac35141142ba8d628b6fe45dad1c7775e997b6e8bd59b72fadd35ec007"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.238858 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerStarted","Data":"fca5d3f3036ac6c156ce2d28dd16ec27b6634e7c71c1fd4e684c657cc5edc6c8"} Feb 19 00:10:36 crc kubenswrapper[5108]: W0219 00:10:36.239693 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc556da79_b025_425f_b2cd_ac55950c66cc.slice/crio-c98da5235f802bdfdfd12b902f8fa68fcd0688f85988187de3b6742eadfe9a59 WatchSource:0}: Error finding container c98da5235f802bdfdfd12b902f8fa68fcd0688f85988187de3b6742eadfe9a59: Status 404 returned error can't find the container with id c98da5235f802bdfdfd12b902f8fa68fcd0688f85988187de3b6742eadfe9a59 Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.240021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"792f699fc83f9114a3f03cf9925f4e413188dd4d4b012ae2ccff261b26f826ad"} Feb 19 
00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.241792 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:36 crc kubenswrapper[5108]: set -euo pipefail Feb 19 00:10:36 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 19 00:10:36 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 19 00:10:36 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present. Feb 19 00:10:36 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 19 00:10:36 crc kubenswrapper[5108]: TS=$(date +%s) Feb 19 00:10:36 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 19 00:10:36 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0 Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: log_missing_certs(){ Feb 19 00:10:36 crc kubenswrapper[5108]: CUR_TS=$(date +%s) Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 19 00:10:36 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 19 00:10:36 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 19 00:10:36 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: } Feb 19 00:10:36 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 19 00:10:36 crc kubenswrapper[5108]: log_missing_certs Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 5 Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \ Feb 19 00:10:36 crc kubenswrapper[5108]: --logtostderr \ Feb 19 00:10:36 crc kubenswrapper[5108]: --secure-listen-address=:9108 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \ Feb 19 00:10:36 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \ Feb 19 00:10:36 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT} Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2dh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-bbrq4_openshift-ovn-kubernetes(c556da79-b025-425f-b2cd-ac55950c66cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.242966 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"ca1e7a54e78a4c168d5ac530eb081eb20ed267de5741fb818c05e06088dcc900"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.243955 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: source "/env/_master" Feb 19 00:10:36 crc kubenswrapper[5108]: set +o 
allexport Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "" != "" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP Feb 19 00:10:36 crc kubenswrapper[5108]: # will rollout control plane pods as well Feb 19 00:10:36 crc 
kubenswrapper[5108]: network_segmentation_enabled_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: multi_network_enabled_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: route_advertisements_enable_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode) Feb 19 00:10:36 crc kubenswrapper[5108]: multi_network_policy_enabled_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc 
kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode) Feb 19 00:10:36 crc kubenswrapper[5108]: admin_network_policy_enabled_flag= Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then Feb 19 00:10:36 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared" Feb 19 00:10:36 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then Feb 19 00:10:36 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local" Feb 19 00:10:36 crc kubenswrapper[5108]: else Feb 19 00:10:36 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 19 00:10:36 crc kubenswrapper[5108]: exit 1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-interconnect \ Feb 19 00:10:36 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 19 00:10:36 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --metrics-enable-pprof \ Feb 19 00:10:36 crc kubenswrapper[5108]: --metrics-enable-config-duration \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \ Feb 19 00:10:36 crc kubenswrapper[5108]: 
${ovn_v4_transit_switch_subnet_opt} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${gateway_mode_flags} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-egress-ip=true \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-egress-firewall=true \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-egress-qos=true \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-egress-service=true \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-multicast \ Feb 19 00:10:36 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag} Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2dh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-bbrq4_openshift-ovn-kubernetes(c556da79-b025-425f-b2cd-ac55950c66cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.244255 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 19 00:10:36 crc kubenswrapper[5108]: while [ true ]; Feb 19 00:10:36 crc kubenswrapper[5108]: do Feb 19 00:10:36 crc kubenswrapper[5108]: for f in $(ls /tmp/serviceca); do Feb 19 00:10:36 crc kubenswrapper[5108]: echo $f Feb 19 00:10:36 crc kubenswrapper[5108]: ca_file_path="/tmp/serviceca/${f}" Feb 19 00:10:36 crc kubenswrapper[5108]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 19 00:10:36 crc 
kubenswrapper[5108]: reg_dir_path="/etc/docker/certs.d/${f}"
Feb 19 00:10:36 crc kubenswrapper[5108]: if [ -e "${reg_dir_path}" ]; then
Feb 19 00:10:36 crc kubenswrapper[5108]: cp -u $ca_file_path $reg_dir_path/ca.crt
Feb 19 00:10:36 crc kubenswrapper[5108]: else
Feb 19 00:10:36 crc kubenswrapper[5108]: mkdir $reg_dir_path
Feb 19 00:10:36 crc kubenswrapper[5108]: cp $ca_file_path $reg_dir_path/ca.crt
Feb 19 00:10:36 crc kubenswrapper[5108]: fi
Feb 19 00:10:36 crc kubenswrapper[5108]: done
Feb 19 00:10:36 crc kubenswrapper[5108]: for d in $(ls /etc/docker/certs.d); do
Feb 19 00:10:36 crc kubenswrapper[5108]: echo $d
Feb 19 00:10:36 crc kubenswrapper[5108]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Feb 19 00:10:36 crc kubenswrapper[5108]: reg_conf_path="/tmp/serviceca/${dp}"
Feb 19 00:10:36 crc kubenswrapper[5108]: if [ ! -e "${reg_conf_path}" ]; then
Feb 19 00:10:36 crc kubenswrapper[5108]: rm -rf /etc/docker/certs.d/$d
Feb 19 00:10:36 crc kubenswrapper[5108]: fi
Feb 19 00:10:36 crc kubenswrapper[5108]: done
Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait ${!}
Feb 19 00:10:36 crc kubenswrapper[5108]: done
Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcgpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsm7k_openshift-image-registry(97553d38-332c-4cc9-8732-5363a62dde8c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.244352 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kb56v" event={"ID":"5da66974-30d5-4571-b5df-d264febc8a9b","Type":"ContainerStarted","Data":"68f31f6cfa815246242bb612412956bd18c5337ba250d167af86b740869a57e7"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.245035 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to 
\"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.245329 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsm7k" podUID="97553d38-332c-4cc9-8732-5363a62dde8c" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.245609 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2lkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.245973 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"97ba31899a8bf93cbd0751220565b42a6c3d1a45ebe6f0d259c53cf41d8bb36f"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.246678 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:36 crc kubenswrapper[5108]: set -uo pipefail Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc 
kubenswrapper[5108]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 19 00:10:36 crc kubenswrapper[5108]: HOSTS_FILE="/etc/hosts" Feb 19 00:10:36 crc kubenswrapper[5108]: TEMP_FILE="/tmp/hosts.tmp" Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Make a temporary file with the old hosts file's attributes. Feb 19 00:10:36 crc kubenswrapper[5108]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 19 00:10:36 crc kubenswrapper[5108]: echo "Failed to preserve hosts file. Exiting." Feb 19 00:10:36 crc kubenswrapper[5108]: exit 1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: while true; do Feb 19 00:10:36 crc kubenswrapper[5108]: declare -A svc_ips Feb 19 00:10:36 crc kubenswrapper[5108]: for svc in "${services[@]}"; do Feb 19 00:10:36 crc kubenswrapper[5108]: # Fetch service IP from cluster dns if present. We make several tries Feb 19 00:10:36 crc kubenswrapper[5108]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 19 00:10:36 crc kubenswrapper[5108]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 19 00:10:36 crc kubenswrapper[5108]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 19 00:10:36 crc kubenswrapper[5108]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 19 00:10:36 crc kubenswrapper[5108]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Feb 19 00:10:36 crc kubenswrapper[5108]: for i in ${!cmds[*]}
Feb 19 00:10:36 crc kubenswrapper[5108]: do
Feb 19 00:10:36 crc kubenswrapper[5108]: ips=($(eval "${cmds[i]}"))
Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Feb 19 00:10:36 crc kubenswrapper[5108]: svc_ips["${svc}"]="${ips[@]}"
Feb 19 00:10:36 crc kubenswrapper[5108]: break
Feb 19 00:10:36 crc kubenswrapper[5108]: fi
Feb 19 00:10:36 crc kubenswrapper[5108]: done
Feb 19 00:10:36 crc kubenswrapper[5108]: done
Feb 19 00:10:36 crc kubenswrapper[5108]: 
Feb 19 00:10:36 crc kubenswrapper[5108]: # Update /etc/hosts only if we get valid service IPs
Feb 19 00:10:36 crc kubenswrapper[5108]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Feb 19 00:10:36 crc kubenswrapper[5108]: # Stale entries could exist in /etc/hosts if the service is deleted
Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -n "${svc_ips[*]-}" ]]; then
Feb 19 00:10:36 crc kubenswrapper[5108]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Feb 19 00:10:36 crc kubenswrapper[5108]: if !
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 19 00:10:36 crc kubenswrapper[5108]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: continue Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # Append resolver entries for services Feb 19 00:10:36 crc kubenswrapper[5108]: rc=0 Feb 19 00:10:36 crc kubenswrapper[5108]: for svc in "${!svc_ips[@]}"; do Feb 19 00:10:36 crc kubenswrapper[5108]: for ip in ${svc_ips[${svc}]}; do Feb 19 00:10:36 crc kubenswrapper[5108]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ $rc -ne 0 ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: continue Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 19 00:10:36 crc kubenswrapper[5108]: # Replace /etc/hosts with our modified version if needed Feb 19 00:10:36 crc kubenswrapper[5108]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 19 00:10:36 crc kubenswrapper[5108]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: sleep 60 & wait Feb 19 00:10:36 crc kubenswrapper[5108]: unset svc_ips Feb 19 00:10:36 crc kubenswrapper[5108]: done Feb 19 00:10:36 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klkhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-kb56v_openshift-dns(5da66974-30d5-4571-b5df-d264febc8a9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.246981 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.247012 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sf757,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gxmww_openshift-multus(ffe88610-b8e8-4a54-9e50-62ebbfd5d6db): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.247241 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65
534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.247695 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:10:36 crc kubenswrapper[5108]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Feb 19 00:10:36 crc kubenswrapper[5108]: apiVersion: v1
Feb 19 00:10:36 crc kubenswrapper[5108]: clusters:
Feb 19 00:10:36 crc kubenswrapper[5108]: - cluster:
Feb 19 00:10:36 crc kubenswrapper[5108]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Feb 19 00:10:36 crc kubenswrapper[5108]: server: https://api-int.crc.testing:6443
Feb 19 00:10:36 crc kubenswrapper[5108]: name: default-cluster
Feb 19 00:10:36 crc kubenswrapper[5108]: contexts:
Feb 19 00:10:36 crc kubenswrapper[5108]: - context:
Feb 19 00:10:36 crc kubenswrapper[5108]: cluster: default-cluster
Feb 19 00:10:36 crc kubenswrapper[5108]: namespace: default
Feb 19 00:10:36 crc kubenswrapper[5108]: user: default-auth
Feb 19 00:10:36 crc kubenswrapper[5108]: name: default-context
Feb 19 00:10:36 crc kubenswrapper[5108]: current-context: default-context
Feb 19 00:10:36 crc kubenswrapper[5108]: kind: Config
Feb 19 00:10:36 crc kubenswrapper[5108]: preferences: {}
Feb 19 00:10:36 crc kubenswrapper[5108]: users:
Feb 19 00:10:36 crc kubenswrapper[5108]: - name: default-auth
Feb 19 00:10:36 crc kubenswrapper[5108]: user:
Feb 19 00:10:36 crc kubenswrapper[5108]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 19 00:10:36 crc kubenswrapper[5108]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 19 00:10:36 crc kubenswrapper[5108]: EOF
Feb 19 00:10:36 crc kubenswrapper[5108]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdz5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-vk6d6_openshift-ovn-kubernetes(7f4459ce-0bd5-493a-813f-977d6e26f440): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.247728 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-kb56v" podUID="5da66974-30d5-4571-b5df-d264febc8a9b" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.248046 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.248078 5108 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gxmww" podUID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.248109 5108 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2lkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.248738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v42mj" event={"ID":"c8ba935e-bb01-466a-8b94-8b0c15e535b1","Type":"ContainerStarted","Data":"4fff08eff2406da2c14cbcc8b5a5d5b4f840f1d31c0ec2beb9c3dca4e17b30f1"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.249580 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" 
podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.249637 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.250409 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 19 00:10:36 crc kubenswrapper[5108]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 19 00:10:36 crc kubenswrapper[5108]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsbs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-v42mj_openshift-multus(c8ba935e-bb01-466a-8b94-8b0c15e535b1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.250665 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c3b32536ddb0b0474c656806fd0bef3a695c29b55dc9f16418809f76b9d9562c"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.251555 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-v42mj" podUID="c8ba935e-bb01-466a-8b94-8b0c15e535b1" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.251872 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: source "/env/_master" Feb 19 00:10:36 crc kubenswrapper[5108]: set +o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 19 00:10:36 crc kubenswrapper[5108]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 19 00:10:36 crc kubenswrapper[5108]: ho_enable="--enable-hybrid-overlay" Feb 19 00:10:36 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 19 00:10:36 crc kubenswrapper[5108]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 19 00:10:36 crc kubenswrapper[5108]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-host=127.0.0.1 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --webhook-port=9743 \ Feb 19 00:10:36 crc kubenswrapper[5108]: ${ho_enable} \ Feb 19 00:10:36 crc 
kubenswrapper[5108]: --enable-interconnect \ Feb 19 00:10:36 crc kubenswrapper[5108]: --disable-approver \ Feb 19 00:10:36 crc kubenswrapper[5108]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --wait-for-kubernetes-api=200s \ Feb 19 00:10:36 crc kubenswrapper[5108]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.251974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"822ad1e4ac7fa71c14ff2d3148ec3b68f7ce7319073813af2489c538ba721c1f"} Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 
00:10:36.253552 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: source /etc/kubernetes/apiserver-url.env Feb 19 00:10:36 crc kubenswrapper[5108]: else Feb 19 00:10:36 crc kubenswrapper[5108]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 19 00:10:36 crc kubenswrapper[5108]: exit 1 Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,}
,EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.253985 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:36 crc kubenswrapper[5108]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:36 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:36 crc kubenswrapper[5108]: set -o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: source "/env/_master" Feb 19 00:10:36 crc kubenswrapper[5108]: set +o allexport Feb 19 00:10:36 crc kubenswrapper[5108]: fi Feb 19 00:10:36 crc kubenswrapper[5108]: Feb 19 00:10:36 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 19 
00:10:36 crc kubenswrapper[5108]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:36 crc kubenswrapper[5108]: --disable-webhook \ Feb 19 00:10:36 crc kubenswrapper[5108]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 19 00:10:36 crc kubenswrapper[5108]: --loglevel="${LOGLEVEL}" Feb 19 00:10:36 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Feb 19 00:10:36 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.254873 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.255498 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.265775 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.280663 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.292029 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.301833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.302143 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.302254 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.302342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.302351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.302419 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.313885 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.325766 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3ab
a7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to 
sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.337418 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.349765 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.359853 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.390310 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.406089 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.406151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.406167 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.406189 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.406203 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.443694 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGro
ups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd
21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc64
7fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682
480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.446026 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.446200 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.446177781 +0000 UTC m=+96.412824099 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.446299 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.446344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.446496 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.446563 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.446552141 +0000 UTC m=+96.413198459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.446749 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.447005 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.446979942 +0000 UTC m=+96.413626290 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.472238 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.509613 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.510029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.510324 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.510573 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.510766 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.512433 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.547851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.548101 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:36 crc 
kubenswrapper[5108]: I0219 00:10:36.548370 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.548591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.548546847 +0000 UTC m=+96.515193205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.548764 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.548982 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.549158 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:36 
crc kubenswrapper[5108]: E0219 00:10:36.549310 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.548983 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.549640 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.549666 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.549860 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.549568914 +0000 UTC m=+96.516215262 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:36 crc kubenswrapper[5108]: E0219 00:10:36.550137 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:37.550109189 +0000 UTC m=+96.516755527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.553351 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.591971 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.613115 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.613161 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.613171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.613186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.613196 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.632735 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.683186 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.712185 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.716003 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.716057 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.716075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.716101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.716119 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.755104 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.792306 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.818722 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.818776 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.818786 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.818801 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.818810 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.832349 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\
\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.874728 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.911190 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.920723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.920768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.920781 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.920840 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.920855 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:36Z","lastTransitionTime":"2026-02-19T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.955093 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[5108]: I0219 00:10:36.997435 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.023487 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.023535 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.023548 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.023565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.023576 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.031388 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.074374 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.115327 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.126341 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.126423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.126441 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.126471 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.126495 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.165653 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.198050 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.228596 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.228674 5108 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.228686 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.228704 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.228716 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.234865 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.255842 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerStarted","Data":"c98da5235f802bdfdfd12b902f8fa68fcd0688f85988187de3b6742eadfe9a59"} Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.258263 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:37 crc kubenswrapper[5108]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Feb 19 00:10:37 crc kubenswrapper[5108]: set -euo pipefail
Feb 19 00:10:37 crc kubenswrapper[5108]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Feb 19 00:10:37 crc kubenswrapper[5108]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Feb 19 00:10:37 crc kubenswrapper[5108]: # As the secret mount is optional we must wait for the files to be present.
Feb 19 00:10:37 crc kubenswrapper[5108]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Feb 19 00:10:37 crc kubenswrapper[5108]: TS=$(date +%s)
Feb 19 00:10:37 crc kubenswrapper[5108]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Feb 19 00:10:37 crc kubenswrapper[5108]: HAS_LOGGED_INFO=0
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: log_missing_certs(){
Feb 19 00:10:37 crc kubenswrapper[5108]: CUR_TS=$(date +%s)
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Feb 19 00:10:37 crc kubenswrapper[5108]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Feb 19 00:10:37 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Feb 19 00:10:37 crc kubenswrapper[5108]: HAS_LOGGED_INFO=1
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: }
Feb 19 00:10:37 crc kubenswrapper[5108]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Feb 19 00:10:37 crc kubenswrapper[5108]: log_missing_certs
Feb 19 00:10:37 crc kubenswrapper[5108]: sleep 5
Feb 19 00:10:37 crc kubenswrapper[5108]: done
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Feb 19 00:10:37 crc kubenswrapper[5108]: exec /usr/bin/kube-rbac-proxy \
Feb 19 00:10:37 crc kubenswrapper[5108]: --logtostderr \
Feb 19 00:10:37 crc kubenswrapper[5108]: --secure-listen-address=:9108 \
Feb 19 00:10:37 crc kubenswrapper[5108]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Feb 19 00:10:37 crc kubenswrapper[5108]: --upstream=http://127.0.0.1:29108/ \
Feb 19 00:10:37 crc kubenswrapper[5108]: --tls-private-key-file=${TLS_PK} \
Feb 19 00:10:37 crc kubenswrapper[5108]: --tls-cert-file=${TLS_CERT}
Feb 19 00:10:37 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2dh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-bbrq4_openshift-ovn-kubernetes(c556da79-b025-425f-b2cd-ac55950c66cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:10:37 crc kubenswrapper[5108]: > logger="UnhandledError"
Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.261149 5108 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:10:37 crc kubenswrapper[5108]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ -f "/env/_master" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: set -o allexport
Feb 19 00:10:37 crc kubenswrapper[5108]: source "/env/_master"
Feb 19 00:10:37 crc kubenswrapper[5108]: set +o allexport
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "" != "" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "" != "" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "" != "" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "" != "" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: persistent_ips_enabled_flag="--enable-persistent-ips"
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: # This is needed so that converting clusters from GA to TP
Feb 19 00:10:37 crc kubenswrapper[5108]: # will rollout control plane pods as well
Feb 19 00:10:37 crc kubenswrapper[5108]: network_segmentation_enabled_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: multi_network_enabled_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "true" != "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: multi_network_enabled_flag="--enable-multi-network"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: network_segmentation_enabled_flag="--enable-network-segmentation"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: route_advertisements_enable_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: route_advertisements_enable_flag="--enable-route-advertisements"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: # Enable multi-network policy if configured (control-plane always full mode)
Feb 19 00:10:37 crc kubenswrapper[5108]: multi_network_policy_enabled_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "false" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: # Enable admin network policy if configured (control-plane always full mode)
Feb 19 00:10:37 crc kubenswrapper[5108]: admin_network_policy_enabled_flag=
Feb 19 00:10:37 crc kubenswrapper[5108]: if [[ "true" == "true" ]]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: if [ "shared" == "shared" ]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode shared"
Feb 19 00:10:37 crc kubenswrapper[5108]: elif [ "shared" == "local" ]; then
Feb 19 00:10:37 crc kubenswrapper[5108]: gateway_mode_flags="--gateway-mode local"
Feb 19 00:10:37 crc kubenswrapper[5108]: else
Feb 19 00:10:37 crc kubenswrapper[5108]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Feb 19 00:10:37 crc kubenswrapper[5108]: exit 1
Feb 19 00:10:37 crc kubenswrapper[5108]: fi
Feb 19 00:10:37 crc kubenswrapper[5108]: 
Feb 19 00:10:37 crc kubenswrapper[5108]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Feb 19 00:10:37 crc kubenswrapper[5108]: exec /usr/bin/ovnkube \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-interconnect \
Feb 19 00:10:37 crc kubenswrapper[5108]: --init-cluster-manager "${K8S_NODE}" \
Feb 19 00:10:37 crc kubenswrapper[5108]: --config-file=/run/ovnkube-config/ovnkube.conf \
Feb 19 00:10:37 crc kubenswrapper[5108]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Feb 19 00:10:37 crc kubenswrapper[5108]: --metrics-bind-address "127.0.0.1:29108" \
Feb 19 00:10:37 crc kubenswrapper[5108]: --metrics-enable-pprof \
Feb 19 00:10:37 crc kubenswrapper[5108]: --metrics-enable-config-duration \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${ovn_v4_join_subnet_opt} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${ovn_v6_join_subnet_opt} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${ovn_v4_transit_switch_subnet_opt} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${ovn_v6_transit_switch_subnet_opt} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${dns_name_resolver_enabled_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${persistent_ips_enabled_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${multi_network_enabled_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${network_segmentation_enabled_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${gateway_mode_flags} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${route_advertisements_enable_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${preconfigured_udn_addresses_enable_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-egress-ip=true \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-egress-firewall=true \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-egress-qos=true \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-egress-service=true \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-multicast \
Feb 19 00:10:37 crc kubenswrapper[5108]: --enable-multi-external-gateway=true \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${multi_network_policy_enabled_flag} \
Feb 19 00:10:37 crc kubenswrapper[5108]: ${admin_network_policy_enabled_flag}
Feb 19 00:10:37 crc kubenswrapper[5108]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2dh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-bbrq4_openshift-ovn-kubernetes(c556da79-b025-425f-b2cd-ac55950c66cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:37 crc kubenswrapper[5108]: > logger="UnhandledError" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.262392 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.277507 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.316219 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.325163 5108 
reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.331135 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.331219 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.331242 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.331272 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.331295 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.377847 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.413668 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.433732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.433796 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.433807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.433824 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.433835 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.453307 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.458589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.458783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.458803 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.458774892 +0000 UTC m=+98.425421200 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.458914 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.459020 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.458998198 +0000 UTC m=+98.425644526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.459070 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.459135 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.459177 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.459166682 +0000 UTC m=+98.425813000 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.513424 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":
\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d76
45ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"moun
tPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"res
ources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.536380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.536462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 
00:10:37.536516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.536543 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.536595 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.551733 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184
dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"reque
sts\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\
\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\
\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.560356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.560480 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.560572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560740 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560784 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560785 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560817 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560838 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560848 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.560857 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.561543 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.56085832 +0000 UTC m=+98.527504658 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.561638 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.56161513 +0000 UTC m=+98.528261518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.561716 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:39.561702262 +0000 UTC m=+98.528348610 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.574707 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.611835 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.639774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.639855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.639879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.639908 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.639973 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.657027 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\
\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.693883 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.736625 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.742993 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.743065 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.743091 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.743124 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.743150 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.786151 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.812543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.845811 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.845886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.845906 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.845964 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.845987 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.847426 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.847604 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.847624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.847920 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.847902 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.847910 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.848116 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:37 crc kubenswrapper[5108]: E0219 00:10:37.848397 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.854412 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.854914 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.855760 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" 
path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.858560 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.860695 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.864525 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.867653 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.869033 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.870708 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.871362 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.873036 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" 
path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.874448 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.876417 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.877103 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.880147 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.880604 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.881755 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.883446 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.884846 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.886734 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.888578 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.889950 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.893559 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.895267 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.897602 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.897775 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.900087 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.901161 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.903391 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.904241 5108 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.908821 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.913361 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.916999 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.919824 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.922674 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.925110 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.926594 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.927877 5108 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.929181 5108 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.929359 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.934320 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.934642 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.936672 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.939038 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.940364 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.941620 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.943093 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.944923 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.946223 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.948457 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.949034 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.949166 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:37 crc 
kubenswrapper[5108]: I0219 00:10:37.949199 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.949241 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.949272 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:37Z","lastTransitionTime":"2026-02-19T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.952132 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.954764 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.957204 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.958975 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.961276 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" 
path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.962771 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.965317 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.968277 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.970105 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.971309 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.973200 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 19 00:10:37 crc kubenswrapper[5108]: I0219 00:10:37.973663 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.015354 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.052628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.052686 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.052704 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.052729 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.052748 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.059858 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.098666 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.134869 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.155385 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.155450 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.155463 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.155485 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.155497 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.175862 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.213120 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.254019 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.258156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.258202 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.258213 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.258236 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.258247 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.361810 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.361882 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.361892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.361914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.361924 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.464126 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.464521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.464661 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.464805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.465066 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.507922 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.568192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.568325 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.568352 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.568386 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.568412 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.671468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.671546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.671572 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.671602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.671626 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.774522 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.774588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.774599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.774617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.774629 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.877743 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.877813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.877827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.877850 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.877867 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.980984 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.981067 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.981087 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.981114 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:38 crc kubenswrapper[5108]: I0219 00:10:38.981136 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:38Z","lastTransitionTime":"2026-02-19T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.083585 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.083676 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.083689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.083731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.083742 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.186374 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.186439 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.186453 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.186473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.186488 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.289009 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.289093 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.289112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.289141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.289162 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.391901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.392030 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.392057 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.392087 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.392122 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.483190 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.483434 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.483470 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.483420848 +0000 UTC m=+102.450067196 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.483549 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.483672 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.483694 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.483665315 +0000 UTC m=+102.450311663 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.483988 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.484125 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.484098826 +0000 UTC m=+102.450745174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.495249 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.495322 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.495346 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.495410 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.495433 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.584718 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.584814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.584885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.584926 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:39 
crc kubenswrapper[5108]: E0219 00:10:39.585017 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585036 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585091 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585144 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.585120526 +0000 UTC m=+102.551766864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585172 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.585158947 +0000 UTC m=+102.551805285 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585183 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585245 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585265 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.585377 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:43.585349383 +0000 UTC m=+102.551995721 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.599410 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.599473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.599488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.599511 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.599527 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.701898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.702030 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.702054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.702084 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.702111 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.805571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.805713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.805729 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.805757 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.805776 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.847653 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.847726 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.847886 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.847985 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.848202 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.848332 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.848336 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:39 crc kubenswrapper[5108]: E0219 00:10:39.849350 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.908550 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.908665 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.908694 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.908729 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5108]: I0219 00:10:39.908758 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.011380 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.011493 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.011521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.011555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.011582 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.114956 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.115003 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.115017 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.115032 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.115046 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.218008 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.218102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.218128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.218171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.218213 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.322097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.322179 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.322206 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.322238 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.322262 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.426167 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.426269 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.426295 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.426323 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.426342 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.501497 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.501580 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.501600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.501627 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.501645 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: E0219 00:10:40.520392 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.525910 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.525994 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.526007 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.526031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.526048 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: E0219 00:10:40.561332 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.566426 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.566471 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.566483 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.566501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.566513 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: E0219 00:10:40.581443 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.587311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.587362 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.587375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.587395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.587408 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: E0219 00:10:40.598604 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"352aa3ad-02f7-4441-9880-46137003ff3d\\\",\\\"systemUUID\\\":\\\"d735bf3f-8433-4393-ae09-99790265e39c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:40 crc kubenswrapper[5108]: E0219 00:10:40.598851 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.600411 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.600462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.600473 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.600491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.600505 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.702676 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.702732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.702753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.702780 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.702797 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.805560 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.805644 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.805669 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.805701 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.805723 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.908726 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.908815 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.908833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.908859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:40 crc kubenswrapper[5108]: I0219 00:10:40.908884 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:40Z","lastTransitionTime":"2026-02-19T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.011371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.011431 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.011445 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.011468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.011479 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.114408 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.114466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.114477 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.114496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.114530 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.216979 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.217046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.217062 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.217107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.217121 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.319964 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.320029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.320043 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.320064 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.320080 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.422879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.422993 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.423009 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.423028 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.423041 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.526142 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.526230 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.526255 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.526287 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.526310 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.629122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.629195 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.629214 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.629241 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.629264 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.732017 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.732110 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.732131 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.732162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.732182 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.834885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.835000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.835020 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.835049 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.835069 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.847450 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.847528 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.847486 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.847681 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:41 crc kubenswrapper[5108]: E0219 00:10:41.847707 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:41 crc kubenswrapper[5108]: E0219 00:10:41.847857 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:41 crc kubenswrapper[5108]: E0219 00:10:41.848058 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:41 crc kubenswrapper[5108]: E0219 00:10:41.848161 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.859181 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.868504 5108 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.886863 5108 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.900919 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.921402 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937572 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937893 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937911 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.937924 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:41Z","lastTransitionTime":"2026-02-19T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.956526 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.976474 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:41 crc kubenswrapper[5108]: I0219 00:10:41.991157 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.007401 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.020501 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.032708 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.040258 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.040331 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.040344 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.040361 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.040377 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.062850 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGro
ups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd
21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc64
7fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682
480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.076560 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.088841 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.104745 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.118043 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.136376 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.143694 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.143774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.143797 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.143825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.143845 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.165019 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.246439 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.246524 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.246559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.246583 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.246598 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.349212 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.349271 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.349283 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.349302 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.349314 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.452289 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.452371 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.452384 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.452404 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.452414 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.555150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.555230 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.555251 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.555278 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.555298 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.659518 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.659599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.659618 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.659648 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.659670 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.762922 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.763050 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.763072 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.763103 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.763127 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.866211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.866296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.866315 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.866342 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.866362 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.969164 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.969238 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.969256 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.969282 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:42 crc kubenswrapper[5108]: I0219 00:10:42.969298 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:42Z","lastTransitionTime":"2026-02-19T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.073391 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.073497 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.073520 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.073576 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.073596 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.176791 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.176849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.176863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.176885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.176898 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.279479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.279559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.279576 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.279602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.279619 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.383035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.383118 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.383142 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.383209 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.383246 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.486118 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.486174 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.486188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.486207 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.486219 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.541244 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.541425 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.541399193 +0000 UTC m=+110.508045501 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.541499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.541542 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.541683 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.541690 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.541736 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.541729741 +0000 UTC m=+110.508376049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.541777 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.541751752 +0000 UTC m=+110.508398100 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.589478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.589552 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.589573 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.589599 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.589618 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.642751 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.642872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643049 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.643165 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643226 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.643148622 +0000 UTC m=+110.609794970 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643394 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643433 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643305 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643455 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643489 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643516 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643614 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.643567513 +0000 UTC m=+110.610213861 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.643927 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.643890952 +0000 UTC m=+110.610537430 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.692022 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.692134 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.692163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.692196 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.692225 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.795383 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.799177 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.799221 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.799255 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.799279 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.847680 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.847743 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.847766 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.847913 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.848160 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.848275 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.848399 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:10:43 crc kubenswrapper[5108]: E0219 00:10:43.848597 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.901360 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.901402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.901415 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.901431 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:43 crc kubenswrapper[5108]: I0219 00:10:43.901447 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:43Z","lastTransitionTime":"2026-02-19T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.003520 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.003598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.003612 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.003629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.003641 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.106807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.106865 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.106882 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.106905 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.106925 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.209076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.209133 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.209146 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.209163 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.209176 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.311730 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.311804 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.311825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.311851 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.311871 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.414394 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.414462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.414482 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.414508 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.414526 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.516391 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.516441 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.516454 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.516472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.516485 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.618808 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.618854 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.618863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.618877 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.618887 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.722118 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.722190 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.722212 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.722242 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.722264 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.824879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.824973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.824988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.825015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.825034 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.927364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.927457 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.927494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.927525 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:44 crc kubenswrapper[5108]: I0219 00:10:44.927548 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:44Z","lastTransitionTime":"2026-02-19T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.029291 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.029354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.029369 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.029389 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.029400 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.131987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.132055 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.132075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.132098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.132115 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.235063 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.235157 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.235186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.235215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.235236 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.337632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.337698 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.337712 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.337732 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.337783 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.441218 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.441308 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.441329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.441357 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.441379 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.543915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.544061 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.544081 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.544108 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.544127 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.646759 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.646820 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.646873 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.646904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.646921 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.750399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.750490 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.750509 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.750536 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.750551 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.847671 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.847671 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:45 crc kubenswrapper[5108]: E0219 00:10:45.847863 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.847917 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:45 crc kubenswrapper[5108]: E0219 00:10:45.848044 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:45 crc kubenswrapper[5108]: E0219 00:10:45.848219 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.848309 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:45 crc kubenswrapper[5108]: E0219 00:10:45.848418 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.852430 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.852488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.852501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.852517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.852529 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.956106 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.956188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.956215 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.956247 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:45 crc kubenswrapper[5108]: I0219 00:10:45.956270 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:45Z","lastTransitionTime":"2026-02-19T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.059590 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.059687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.059706 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.059774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.060370 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.163211 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.163255 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.163266 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.163282 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.163294 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.265706 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.265777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.265800 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.265829 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.265851 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.368420 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.368496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.368516 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.368541 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.368559 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.471842 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.471920 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.472012 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.472049 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.472073 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.574231 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.574335 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.574363 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.574397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.574423 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.676745 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.677140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.677309 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.677487 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.677644 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.779802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.779850 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.779863 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.779880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.779891 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.883913 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.884041 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.884054 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.884073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.884084 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.987246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.987300 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.987309 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.987321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:46 crc kubenswrapper[5108]: I0219 00:10:46.987329 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:46Z","lastTransitionTime":"2026-02-19T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.089203 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.089581 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.089605 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.089629 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.089650 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.192586 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.192690 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.192761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.192837 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.192867 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.287443 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsm7k" event={"ID":"97553d38-332c-4cc9-8732-5363a62dde8c","Type":"ContainerStarted","Data":"11db843111137af53b4e5d1f38695687d245abfdef88f5ada1f996625930140c"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.295537 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.295590 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.295610 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.295632 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.295650 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.304970 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.317293 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.335335 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.354205 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.369397 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.380050 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.392446 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.398071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.398137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.398157 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.398183 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.398202 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.407029 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://11db843111137af53b4e5d1f38695687d245abfdef88f5ada1f996625930140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.442763 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.458187 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.468720 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.482047 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.497257 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.501615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.501702 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.501728 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.501760 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.501784 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.510974 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.536259 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.546024 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.558198 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.575574 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.591572 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.604770 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.604837 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.604907 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.604957 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.604976 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.707140 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.707188 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.707197 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.707213 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.707222 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.809901 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.809987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.809999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.810018 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.810031 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.847822 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:47 crc kubenswrapper[5108]: E0219 00:10:47.848116 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.848380 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.848637 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:47 crc kubenswrapper[5108]: E0219 00:10:47.848662 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:47 crc kubenswrapper[5108]: E0219 00:10:47.848755 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.848763 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:47 crc kubenswrapper[5108]: E0219 00:10:47.848858 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.912907 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.913015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.913044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.913076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:47 crc kubenswrapper[5108]: I0219 00:10:47.913099 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.015766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.015813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.015825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.015841 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.015853 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.118740 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.118792 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.118805 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.118823 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.118835 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.220892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.220969 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.220981 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.220999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.221045 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.292733 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerStarted","Data":"d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.292793 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerStarted","Data":"752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.305855 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://11db84
3111137af53b4e5d1f38695687d245abfdef88f5ada1f996625930140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.323835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.323895 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc 
kubenswrapper[5108]: I0219 00:10:48.323908 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.323928 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.323961 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.329190 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o
://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0
],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\
\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":
\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced
0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.347179 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.360394 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.378325 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.393916 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.411496 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.426480 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.426552 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.426572 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.426596 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.426612 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.438282 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.451428 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.462037 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172
bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",
\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.477586 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.490002 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.500198 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.511496 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.522829 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.528197 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.528510 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.528682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 
crc kubenswrapper[5108]: I0219 00:10:48.528816 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.528949 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.537609 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3ab
a7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to 
sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.552180 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.563741 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.573225 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.631284 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.631390 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.631429 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.631463 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.631486 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.734491 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.734580 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.734608 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.734638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.734662 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.837882 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.837958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.837971 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.837985 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.837995 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.848534 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:10:48 crc kubenswrapper[5108]: E0219 00:10:48.848703 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.943498 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.943538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.943549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.943565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5108]: I0219 00:10:48.943575 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.045987 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.046031 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.046041 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.046059 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.046070 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.148668 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.148714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.148726 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.148744 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.148755 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.250786 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.250835 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.250845 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.250859 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.250868 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.298196 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.299982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v42mj" event={"ID":"c8ba935e-bb01-466a-8b94-8b0c15e535b1","Type":"ContainerStarted","Data":"b3e13291cabcf2b49b52130be3a87674eb083bb09308bb95aea7cc9e7a4c8884"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.303201 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4a55892270f0e8160d0337b19dea407b1dd375f2110553b3b39583c9c0faa57e"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.327372 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e615cef0-ed9e-4605-b2dc-a11e69dec261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff24614e85b0e24dc45e184dd221bb366397dc9b0e352bbddb3ed85a1ddd006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3124e46b1a08f8b524f8129a27b8eb0e90eb56210ead523e870ee7f48bc8447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bbddaadbfd2830d7645ccd13e8308ed1c2ad7168994ec3d45674def6664322d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b8723c2a8d4bab84c4b3fe052fd5aed103d3c7e2b43befa85e9431464ef9649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0430435d9cb0728513dab9c5ab3f3166bd21857fb3efd3439ec3ddf563ea5d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81ffc647fb4d3578a0a25fe8afbbe655da25f91b0fa4aaa3a087f49536e2057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7430780119db7f351998e16dadb8b02fa241ce8994c36a37fe98ced0044de1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d76719832478847fa91afde0d95042cfaed101c00b963e9009c610f92b18c015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.340007 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.349137 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb56v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da66974-30d5-4571-b5df-d264febc8a9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klkhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb56v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.352572 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.352618 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.352630 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.352645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.352655 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.359892 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4bc85dd-5697-4b34-acbf-7a4d2b05525a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://912093548ad39f1b40ede6b3bc22fadc53b777d2469d2d448e0f027afe0265a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\
\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://71165a4e8b9865539510ee574a6e4c02ad7804a7183a1f0362c018cb5dc60a18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99dcdf8bfe6fbb0165aa178f2a1df3a6066225eba167dfca3dc1a45802496c1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.369285 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.379871 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-v42mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ba935e-bb01-466a-8b94-8b0c15e535b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3e13291cabcf2b49b52130be3a87674eb083bb09308bb95aea7cc9e7a4c8884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nsbs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v42mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc 
kubenswrapper[5108]: I0219 00:10:49.393182 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f4459ce-0bd5-493a-813f-977d6e26f440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdz5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vk6d6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.401596 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c556da79-b025-425f-b2cd-ac55950c66cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2dh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bbrq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.410965 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b62e9d-81e7-4ca1-a374-a8e89f8afa24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef26f63e49eb1b32949af28a6174ffa743547344a0c18fc46928ad258e155404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172
bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f46b88d827d545945646be9c09948c3f502502249354399858215429560f2192\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",
\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.425350 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.436819 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.448412 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.459376 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.459447 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.459467 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.459493 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.459511 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.462391 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"995cb3be-1541-4090-83fe-8bf1a8259f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2lkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5zp6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.481543 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf757\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gxmww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.493782 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d740232-965c-462f-99ca-35945243e20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:14Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190334 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190382 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190463 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771459813\\\\\\\\\\\\\\\" (2026-02-19 00:10:12 +0000 UTC to 2026-02-19 00:10:13 +0000 UTC (now=2026-02-19 00:10:14.190418013 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190344 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI0219 00:10:14.190497 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4136693499/tls.crt::/tmp/serving-cert-4136693499/tls.key\\\\\\\"\\\\nI0219 00:10:14.190355 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0219 00:10:14.190610 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771459814\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771459813\\\\\\\\\\\\\\\" (2026-02-18 23:10:13 +0000 UTC to 2029-02-18 23:10:13 +0000 UTC (now=2026-02-19 00:10:14.190596988 +0000 UTC))\\\\\\\"\\\\nI0219 00:10:14.190625 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0219 00:10:14.190637 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 00:10:14.190646 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF0219 00:10:14.191152 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.507140 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5abda7b-3e5f-4e9d-af16-8bbc3c1086b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a11c2239d74474a425187a8c98072dc2d815e01d359e50675a57b1af6f458e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6d55b6fae4421627760b854325dcf431dd91c546593f103196fb4b32a7ad871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d4b88be3b7bef48900b5230b1c074f3605892f2d6878417cda6e30efa11ffd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f7ef34aaf207a0b31226c4d62399ca777ebd8368addba697615daf3d66a729\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.522498 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.534106 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a3580-a7a9-49f7-8948-2d949558d2d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbf9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clv5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.543369 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsm7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97553d38-332c-4cc9-8732-5363a62dde8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://11db843111137af53b4e5d1f38695687d245abfdef88f5ada1f996625930140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:10:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcgpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsm7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.562299 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.562364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.562375 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.562397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.562409 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.664768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.664816 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.664839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.664855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.664864 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.767326 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.767398 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.767416 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.767438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.767457 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.847783 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.847802 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:49 crc kubenswrapper[5108]: E0219 00:10:49.848306 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:49 crc kubenswrapper[5108]: E0219 00:10:49.848537 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.848604 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:49 crc kubenswrapper[5108]: E0219 00:10:49.848684 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.848583 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:49 crc kubenswrapper[5108]: E0219 00:10:49.849087 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.869559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.869625 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.869643 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.869668 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.869686 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.972467 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.972554 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.972567 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.972588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5108]: I0219 00:10:49.972600 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.075238 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.075630 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.075645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.075665 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.075678 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.177451 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.177492 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.177502 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.177517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.177527 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.279719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.279779 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.279789 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.279806 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.279816 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.309672 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"d54241667ffbc399088f5b37fdd84585d375dd62e5401cdaf757444adc708261"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.312630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"046b5e5533bf5ef195598b1a1fd1fbf16f923a752ed0065fc3b404e081e38283"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.317185 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kb56v" event={"ID":"5da66974-30d5-4571-b5df-d264febc8a9b","Type":"ContainerStarted","Data":"f34d1a75a5ac2f5ab4877a0531e7e1e6b6aa1c90d4c2f4895640fb161d7d34ac"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.357780 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v42mj" podStartSLOduration=88.357759043 podStartE2EDuration="1m28.357759043s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.357751593 +0000 UTC m=+109.324397961" watchObservedRunningTime="2026-02-19 00:10:50.357759043 +0000 UTC m=+109.324405351" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.383092 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.383154 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.383166 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.383181 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.383191 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.411728 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podStartSLOduration=87.411712436 podStartE2EDuration="1m27.411712436s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.411350746 +0000 UTC m=+109.377997094" watchObservedRunningTime="2026-02-19 00:10:50.411712436 +0000 UTC m=+109.378358744" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.430382 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.430365424 podStartE2EDuration="15.430365424s" podCreationTimestamp="2026-02-19 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.429914232 +0000 UTC m=+109.396560550" watchObservedRunningTime="2026-02-19 00:10:50.430365424 +0000 UTC m=+109.397011732" Feb 19 00:10:50 crc kubenswrapper[5108]: 
I0219 00:10:50.485068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.485127 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.485141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.485156 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.485169 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.499791 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podStartSLOduration=88.499768919 podStartE2EDuration="1m28.499768919s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.499575604 +0000 UTC m=+109.466221922" watchObservedRunningTime="2026-02-19 00:10:50.499768919 +0000 UTC m=+109.466415227" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.556155 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=15.556133115 podStartE2EDuration="15.556133115s" podCreationTimestamp="2026-02-19 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.555227501 +0000 UTC m=+109.521873849" watchObservedRunningTime="2026-02-19 00:10:50.556133115 +0000 UTC m=+109.522779453" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.587095 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.587545 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.587556 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.587571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.587582 5108 setters.go:618] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.594049 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vsm7k" podStartSLOduration=88.594028988 podStartE2EDuration="1m28.594028988s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.59374397 +0000 UTC m=+109.560390278" watchObservedRunningTime="2026-02-19 00:10:50.594028988 +0000 UTC m=+109.560675296" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.633825 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.633810991 podStartE2EDuration="15.633810991s" podCreationTimestamp="2026-02-19 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.633065361 +0000 UTC m=+109.599711689" watchObservedRunningTime="2026-02-19 00:10:50.633810991 +0000 UTC m=+109.600457299" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.677807 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=14.677789666 podStartE2EDuration="14.677789666s" podCreationTimestamp="2026-02-19 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
00:10:50.676487011 +0000 UTC m=+109.643133349" watchObservedRunningTime="2026-02-19 00:10:50.677789666 +0000 UTC m=+109.644435974" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.689865 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.689918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.689947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.689967 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.689981 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.704487 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kb56v" podStartSLOduration=88.704469709 podStartE2EDuration="1m28.704469709s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:50.704048768 +0000 UTC m=+109.670695076" watchObservedRunningTime="2026-02-19 00:10:50.704469709 +0000 UTC m=+109.671116017" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.752612 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.752647 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.752656 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.752669 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.752677 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.788462 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr"] Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.792617 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.794239 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.794664 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.794886 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.795098 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.816682 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.824913 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.935387 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b40b518e-2611-4530-ae18-12d37c8a315f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: 
\"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.935877 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.935902 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.935922 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b518e-2611-4530-ae18-12d37c8a315f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:50 crc kubenswrapper[5108]: I0219 00:10:50.935970 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b40b518e-2611-4530-ae18-12d37c8a315f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.036735 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.036782 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.036845 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.036885 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b40b518e-2611-4530-ae18-12d37c8a315f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.036999 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b518e-2611-4530-ae18-12d37c8a315f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: 
\"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.037060 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b40b518e-2611-4530-ae18-12d37c8a315f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.037212 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b40b518e-2611-4530-ae18-12d37c8a315f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.038526 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b40b518e-2611-4530-ae18-12d37c8a315f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.044401 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b518e-2611-4530-ae18-12d37c8a315f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.055132 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b40b518e-2611-4530-ae18-12d37c8a315f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bv4xr\" (UID: \"b40b518e-2611-4530-ae18-12d37c8a315f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.109652 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" Feb 19 00:10:51 crc kubenswrapper[5108]: W0219 00:10:51.123336 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb40b518e_2611_4530_ae18_12d37c8a315f.slice/crio-c459487d51dd5e8390fca456bf89e1fac5a3467dbc0e12674de89919953c38a1 WatchSource:0}: Error finding container c459487d51dd5e8390fca456bf89e1fac5a3467dbc0e12674de89919953c38a1: Status 404 returned error can't find the container with id c459487d51dd5e8390fca456bf89e1fac5a3467dbc0e12674de89919953c38a1 Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.324134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerStarted","Data":"351cbd98b3f9f76cfeb5fd599a7ad1ec16fbd6b31b29e7f302875930c96b92c3"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.325782 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e0d8e07c045a4523ff5ff6e2575f0c99a2f2472c480c2283c285ccc69c39ae14"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.327273 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" 
event={"ID":"b40b518e-2611-4530-ae18-12d37c8a315f","Type":"ContainerStarted","Data":"62dff99fc1330398b7edc9f4d94c55471830a07d749e6a7907767077dc426b9b"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.327371 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" event={"ID":"b40b518e-2611-4530-ae18-12d37c8a315f","Type":"ContainerStarted","Data":"c459487d51dd5e8390fca456bf89e1fac5a3467dbc0e12674de89919953c38a1"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.328843 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"731eb93a0486f0c5927372d7b634ecbf47ecb122cfe3a152dc365fa9fe8c4c67"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.339077 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" exitCode=0 Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.339161 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.397026 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bv4xr" podStartSLOduration=89.396990367 podStartE2EDuration="1m29.396990367s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:51.396499013 +0000 UTC m=+110.363145321" watchObservedRunningTime="2026-02-19 00:10:51.396990367 +0000 UTC 
m=+110.363636695" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.543215 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.543324 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.543389 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.543423 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:07.543386638 +0000 UTC m=+126.510032946 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.543425 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.543470 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.543531 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:07.543521402 +0000 UTC m=+126.510167710 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.543601 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:07.543558613 +0000 UTC m=+126.510204911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.644738 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.644785 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.644825 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645086 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645170 5108 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645188 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645198 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645229 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:07.645195569 +0000 UTC m=+126.611841917 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645115 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645251 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645258 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645261 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:07.645247161 +0000 UTC m=+126.611893499 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.645289 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:07.645274011 +0000 UTC m=+126.611920319 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.851979 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.852009 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.852102 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.852115 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:51 crc kubenswrapper[5108]: I0219 00:10:51.852174 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.852273 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.852340 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:51 crc kubenswrapper[5108]: E0219 00:10:51.852390 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345330 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345393 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345413 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345423 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.345432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.346622 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="351cbd98b3f9f76cfeb5fd599a7ad1ec16fbd6b31b29e7f302875930c96b92c3" exitCode=0 Feb 19 00:10:52 crc kubenswrapper[5108]: I0219 00:10:52.346665 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"351cbd98b3f9f76cfeb5fd599a7ad1ec16fbd6b31b29e7f302875930c96b92c3"} Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.353092 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="ea1f0313e4d240fc00032d735c982311e93d730205d30bfafcfde51f4c9acd09" exitCode=0 Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.353151 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"ea1f0313e4d240fc00032d735c982311e93d730205d30bfafcfde51f4c9acd09"} Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.856961 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.857002 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:53 crc kubenswrapper[5108]: E0219 00:10:53.857104 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.857217 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:53 crc kubenswrapper[5108]: I0219 00:10:53.857238 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:53 crc kubenswrapper[5108]: E0219 00:10:53.857394 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:53 crc kubenswrapper[5108]: E0219 00:10:53.857562 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:53 crc kubenswrapper[5108]: E0219 00:10:53.857811 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:54 crc kubenswrapper[5108]: I0219 00:10:54.359579 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="1d9fabbcd41ef309b74045c3f0768d34aa21556e332717cafe6786c4a88cd448" exitCode=0 Feb 19 00:10:54 crc kubenswrapper[5108]: I0219 00:10:54.359709 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"1d9fabbcd41ef309b74045c3f0768d34aa21556e332717cafe6786c4a88cd448"} Feb 19 00:10:54 crc kubenswrapper[5108]: I0219 00:10:54.366071 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.380655 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="be15b6f2391e20b2e74446d6707dcd42fd3ce5780076a1576386fbfd8cc58051" exitCode=0 Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.380766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" 
event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"be15b6f2391e20b2e74446d6707dcd42fd3ce5780076a1576386fbfd8cc58051"} Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.854302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.854358 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:55 crc kubenswrapper[5108]: E0219 00:10:55.854485 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:55 crc kubenswrapper[5108]: E0219 00:10:55.854648 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.854760 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:55 crc kubenswrapper[5108]: E0219 00:10:55.855029 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:55 crc kubenswrapper[5108]: I0219 00:10:55.855271 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:55 crc kubenswrapper[5108]: E0219 00:10:55.855611 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:56 crc kubenswrapper[5108]: I0219 00:10:56.390182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerStarted","Data":"505e6e8c49e7bc262610e1c5f15e3e345fb86a1f427ce79cf20831c1976687fa"} Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.401356 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerStarted","Data":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.401877 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.408056 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="505e6e8c49e7bc262610e1c5f15e3e345fb86a1f427ce79cf20831c1976687fa" exitCode=0 Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.408111 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"505e6e8c49e7bc262610e1c5f15e3e345fb86a1f427ce79cf20831c1976687fa"} Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.445621 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podStartSLOduration=95.44559242 podStartE2EDuration="1m35.44559242s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
00:10:57.445440975 +0000 UTC m=+116.412087353" watchObservedRunningTime="2026-02-19 00:10:57.44559242 +0000 UTC m=+116.412238778" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.503896 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.856098 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.856247 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:57 crc kubenswrapper[5108]: E0219 00:10:57.856785 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.856325 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:57 crc kubenswrapper[5108]: I0219 00:10:57.856299 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:57 crc kubenswrapper[5108]: E0219 00:10:57.857056 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:57 crc kubenswrapper[5108]: E0219 00:10:57.857260 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:57 crc kubenswrapper[5108]: E0219 00:10:57.858068 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:58 crc kubenswrapper[5108]: I0219 00:10:58.415468 5108 generic.go:358] "Generic (PLEG): container finished" podID="ffe88610-b8e8-4a54-9e50-62ebbfd5d6db" containerID="4c4ebe9b5c77f7526d7a26bc7841a6254849dad60da79f62ebc307dd561d465c" exitCode=0 Feb 19 00:10:58 crc kubenswrapper[5108]: I0219 00:10:58.416256 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerDied","Data":"4c4ebe9b5c77f7526d7a26bc7841a6254849dad60da79f62ebc307dd561d465c"} Feb 19 00:10:58 crc kubenswrapper[5108]: I0219 00:10:58.416369 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:58 crc kubenswrapper[5108]: I0219 00:10:58.416389 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:58 crc kubenswrapper[5108]: I0219 00:10:58.464697 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.424415 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gxmww" event={"ID":"ffe88610-b8e8-4a54-9e50-62ebbfd5d6db","Type":"ContainerStarted","Data":"81966026dab19490feadc189f9aa114859eb94f5b5249a20b5d9817594cbb303"} Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.460416 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gxmww" podStartSLOduration=97.460396263 podStartE2EDuration="1m37.460396263s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:59.459271523 +0000 UTC m=+118.425917891" watchObservedRunningTime="2026-02-19 00:10:59.460396263 +0000 UTC m=+118.427042601" Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.545490 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2clv5"] Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.545758 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:10:59 crc kubenswrapper[5108]: E0219 00:10:59.545999 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.854760 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.854858 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:59 crc kubenswrapper[5108]: I0219 00:10:59.854908 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:59 crc kubenswrapper[5108]: E0219 00:10:59.855090 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:59 crc kubenswrapper[5108]: E0219 00:10:59.855330 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:59 crc kubenswrapper[5108]: E0219 00:10:59.855419 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:00 crc kubenswrapper[5108]: I0219 00:11:00.848149 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:00 crc kubenswrapper[5108]: E0219 00:11:00.848923 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:11:01 crc kubenswrapper[5108]: E0219 00:11:01.806680 5108 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Feb 19 00:11:01 crc kubenswrapper[5108]: I0219 00:11:01.851775 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:01 crc kubenswrapper[5108]: E0219 00:11:01.851913 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:01 crc kubenswrapper[5108]: I0219 00:11:01.852397 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:01 crc kubenswrapper[5108]: E0219 00:11:01.852483 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:01 crc kubenswrapper[5108]: I0219 00:11:01.852608 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:01 crc kubenswrapper[5108]: E0219 00:11:01.852682 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:01 crc kubenswrapper[5108]: E0219 00:11:01.915272 5108 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 00:11:02 crc kubenswrapper[5108]: I0219 00:11:02.847177 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:02 crc kubenswrapper[5108]: E0219 00:11:02.847386 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:11:03 crc kubenswrapper[5108]: I0219 00:11:03.858535 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:03 crc kubenswrapper[5108]: E0219 00:11:03.858757 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:03 crc kubenswrapper[5108]: I0219 00:11:03.859509 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:03 crc kubenswrapper[5108]: E0219 00:11:03.859670 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:03 crc kubenswrapper[5108]: I0219 00:11:03.860878 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:03 crc kubenswrapper[5108]: E0219 00:11:03.861148 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:03 crc kubenswrapper[5108]: I0219 00:11:03.861670 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:11:04 crc kubenswrapper[5108]: I0219 00:11:04.449468 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 19 00:11:04 crc kubenswrapper[5108]: I0219 00:11:04.453151 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa"} Feb 19 00:11:04 crc kubenswrapper[5108]: I0219 00:11:04.453823 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:04 crc kubenswrapper[5108]: I0219 00:11:04.847965 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:04 crc kubenswrapper[5108]: E0219 00:11:04.848261 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:11:05 crc kubenswrapper[5108]: I0219 00:11:05.847499 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:05 crc kubenswrapper[5108]: I0219 00:11:05.847534 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:05 crc kubenswrapper[5108]: I0219 00:11:05.847499 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:05 crc kubenswrapper[5108]: E0219 00:11:05.847642 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:05 crc kubenswrapper[5108]: E0219 00:11:05.847717 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:05 crc kubenswrapper[5108]: E0219 00:11:05.847784 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:06 crc kubenswrapper[5108]: I0219 00:11:06.847822 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:06 crc kubenswrapper[5108]: E0219 00:11:06.848074 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clv5" podUID="766a3580-a7a9-49f7-8948-2d949558d2d2" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.548482 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.548642 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.548725 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.548828 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.548795347 +0000 UTC m=+158.515441655 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.548888 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.548910 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.549044 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.549014394 +0000 UTC m=+158.515660772 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.549083 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:39.549059015 +0000 UTC m=+158.515705333 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.650054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.650175 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.650238 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650380 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650440 5108 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650470 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650487 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650525 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs podName:766a3580-a7a9-49f7-8948-2d949558d2d2 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.650494766 +0000 UTC m=+158.617141104 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs") pod "network-metrics-daemon-2clv5" (UID: "766a3580-a7a9-49f7-8948-2d949558d2d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650525 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650576 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:39.650551247 +0000 UTC m=+158.617197575 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650581 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650606 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:11:07 crc kubenswrapper[5108]: E0219 00:11:07.650719 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.650694881 +0000 UTC m=+158.617341239 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.847796 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.847863 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.847868 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.851351 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.851414 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.853041 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 19 00:11:07 crc kubenswrapper[5108]: I0219 00:11:07.854877 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 19 00:11:08 crc kubenswrapper[5108]: I0219 00:11:08.847644 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:08 crc kubenswrapper[5108]: I0219 00:11:08.850553 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 19 00:11:08 crc kubenswrapper[5108]: I0219 00:11:08.851047 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.911753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.949103 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=35.949080488 podStartE2EDuration="35.949080488s" podCreationTimestamp="2026-02-19 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:04.491649399 +0000 UTC m=+123.458295747" watchObservedRunningTime="2026-02-19 00:11:10.949080488 +0000 UTC m=+129.915726796" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.949305 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"] Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.978013 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-lhp9s"] Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.978227 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.987347 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.990914 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"] Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.993544 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.993695 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.993822 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.994113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.995624 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 19 00:11:10 crc kubenswrapper[5108]: I0219 00:11:10.996778 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.002229 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.005468 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.005591 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.005796 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.005894 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.006027 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.006095 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.006238 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.006324 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.007085 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.007561 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.007849 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.008159 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.006924 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.009429 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.010188 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.014820 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29524320-mpp5j"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.015834 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.017358 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.017434 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.017510 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.017756 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.018169 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.018264 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.018339 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.018928 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.018958 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 
00:11:11.019069 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.019220 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.019836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.019884 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.024729 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.031593 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.032644 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.033672 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.033812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.033981 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034201 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034282 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034438 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034655 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034739 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034833 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.034907 5108 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035021 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035091 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035249 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035320 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035406 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.035494 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.038289 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.038508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 19 
00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.038992 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.038998 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.040007 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.041234 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.041947 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.042387 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.044803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.058199 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.066532 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.066706 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.066982 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.067433 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.067468 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.067587 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.069054 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.069068 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.069204 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.069344 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.087912 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.087966 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.088953 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.089066 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.089134 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.089423 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.089999 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.091063 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.091645 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.091826 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.094709 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.096081 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.096409 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.096589 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.096772 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 19 
00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.096911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.098279 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099423 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099671 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099723 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099752 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5skf4\" (UniqueName: \"kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc 
kubenswrapper[5108]: I0219 00:11:11.099770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-serving-cert\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099807 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-image-import-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099842 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099880 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lw7m\" (UniqueName: \"kubernetes.io/projected/da814e69-94f0-4857-92fe-048de6d4b60d-kube-api-access-8lw7m\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099891 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.099899 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcqk\" (UniqueName: \"kubernetes.io/projected/76d1bae7-e54a-44be-9688-fcce4fd96146-kube-api-access-vvcqk\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100297 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100316 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100331 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100344 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100359 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100364 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-audit-policies\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100626 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-encryption-config\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100667 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100692 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100718 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk7b\" (UniqueName: \"kubernetes.io/projected/347e23fe-fd18-4ee1-a333-1302eefd97e8-kube-api-access-kqk7b\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100744 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100765 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100774 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-encryption-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100835 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbj6\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-kube-api-access-6mbj6\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100862 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100900 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-config\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100922 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.100997 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42td\" (UniqueName: \"kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101049 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101072 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-client\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") 
" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101102 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101151 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101175 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/960bf537-20fc-4209-b634-54e0046436b3-kube-api-access-r6v9g\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101233 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101260 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101378 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101415 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjnl9\" (UniqueName: \"kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " 
pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101473 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101501 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101542 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101579 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da814e69-94f0-4857-92fe-048de6d4b60d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101598 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwb8\" (UniqueName: \"kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101613 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101629 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-serving-cert\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101649 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit-dir\") pod 
\"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101683 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101703 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101742 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960bf537-20fc-4209-b634-54e0046436b3-audit-dir\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101769 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101786 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101805 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-trusted-ca-bundle\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101838 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101874 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d1bae7-e54a-44be-9688-fcce4fd96146-serving-cert\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101893 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-etcd-client\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.101907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-etcd-serving-ca\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.103094 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.105237 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 
00:11:11.108546 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.111174 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.111867 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.118009 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-6vnnq"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.118153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.118464 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.118708 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.119094 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.119225 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.119684 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 19 
00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.119789 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.120392 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.120663 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.120785 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.120993 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.121230 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.121460 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.121888 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.122599 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 19 00:11:11 
crc kubenswrapper[5108]: I0219 00:11:11.123202 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.123426 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.123473 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.123614 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.123733 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.124160 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-6vnnq"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.126690 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-kv2lw"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.128171 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.128318 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.131770 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.132256 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.132615 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.135976 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.136428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-kv2lw"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.142958 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.143688 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.143988 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.152715 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-9dxbw"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.152847 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.159158 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-w5c5q"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.161503 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-9dxbw"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.161649 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.167974 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-n8lfg"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.168337 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.172832 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.173068 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.178729 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.178924 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.182074 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.182407 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.185184 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dnr7x"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.185377 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.189655 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.189829 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.192634 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.192793 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.197629 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.197653 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.198084 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.201100 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.201442 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.201515 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.203841 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-7f8nt"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.203959 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204034 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204061 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204095 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960bf537-20fc-4209-b634-54e0046436b3-audit-dir\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204127 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204158 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204183 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-trusted-ca-bundle\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlr2\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204229 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-auth-proxy-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204249 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204302 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204332 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d1bae7-e54a-44be-9688-fcce4fd96146-serving-cert\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.204719 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.205166 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-trusted-ca-bundle\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.205268 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-etcd-client\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210360 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210467 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-etcd-serving-ca\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210506 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210598 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210646 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-client\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210808 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210842 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5skf4\" (UniqueName: \"kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-serving-cert\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.210907 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff64b385-3fa7-412e-8e7c-a465f30f98e3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.211151 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.211182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-image-import-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.211870 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d1bae7-e54a-44be-9688-fcce4fd96146-serving-cert\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.212289 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.212863 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.213578 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-etcd-serving-ca\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.214059 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960bf537-20fc-4209-b634-54e0046436b3-audit-dir\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.214491 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-etcd-client\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.215220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.215675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-image-import-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.215749 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.216625 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.216778 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-serving-cert\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.216837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.217414 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8lw7m\" (UniqueName: \"kubernetes.io/projected/da814e69-94f0-4857-92fe-048de6d4b60d-kube-api-access-8lw7m\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.218508 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.218774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.218977 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcqk\" (UniqueName: \"kubernetes.io/projected/76d1bae7-e54a-44be-9688-fcce4fd96146-kube-api-access-vvcqk\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219032 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219072 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219134 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219169 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219200 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219227 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-audit-policies\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219318 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-encryption-config\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219351 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff64b385-3fa7-412e-8e7c-a465f30f98e3-config\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219379 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hc76\" (UniqueName: \"kubernetes.io/projected/ff64b385-3fa7-412e-8e7c-a465f30f98e3-kube-api-access-9hc76\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219409 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-serving-cert\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219423 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219442 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219512 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk7b\" (UniqueName: \"kubernetes.io/projected/347e23fe-fd18-4ee1-a333-1302eefd97e8-kube-api-access-kqk7b\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219540 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219568 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-encryption-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.219881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960bf537-20fc-4209-b634-54e0046436b3-audit-policies\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220001 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220035 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b9f121a3-1529-44dd-b4b4-18165c6865b0-tmp-dir\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220075 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220107 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbj6\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-kube-api-access-6mbj6\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220133 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220214 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-machine-approver-tls\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220258 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220292 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-config\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220319 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220370 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q42td\" (UniqueName: \"kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220523 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-client\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " pod="openshift-image-registry/image-pruner-29524320-mpp5j"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220564 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwlf\" (UniqueName: \"kubernetes.io/projected/b9f121a3-1529-44dd-b4b4-18165c6865b0-kube-api-access-gbwlf\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220677 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220845 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220884 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/960bf537-20fc-4209-b634-54e0046436b3-kube-api-access-r6v9g\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.220985 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6w54\" (UniqueName: \"kubernetes.io/projected/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-kube-api-access-g6w54\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.221019 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.221053 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:11.721031786 +0000 UTC m=+130.687678094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.221077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.221099 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.221522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.221702 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-config\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: 
\"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.222432 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.222601 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.222631 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.223120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.223172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc 
kubenswrapper[5108]: I0219 00:11:11.223215 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.226236 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.226461 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.226784 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.226822 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.226765 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.227029 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.227304 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228166 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjnl9\" (UniqueName: \"kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228425 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228534 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca\") pod 
\"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228678 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228761 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.228794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da814e69-94f0-4857-92fe-048de6d4b60d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwb8\" (UniqueName: \"kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc 
kubenswrapper[5108]: I0219 00:11:11.229148 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229279 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229412 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229563 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-config\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229615 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: 
\"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229290 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-serving-cert\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit-dir\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.229974 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.230275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca\") pod \"image-registry-66587d64c8-qv7jb\" 
(UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.230757 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.230854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.230980 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/347e23fe-fd18-4ee1-a333-1302eefd97e8-audit-dir\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.231001 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.230347 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.231442 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.231979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d1bae7-e54a-44be-9688-fcce4fd96146-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.232457 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-etcd-client\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.232661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 
00:11:11.233280 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.235161 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.235713 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-encryption-config\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.236003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da814e69-94f0-4857-92fe-048de6d4b60d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.236944 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp"] Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.237061 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.237165 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/960bf537-20fc-4209-b634-54e0046436b3-encryption-config\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.237316 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.239337 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/347e23fe-fd18-4ee1-a333-1302eefd97e8-serving-cert\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.240981 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.244560 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 
00:11:11.244737 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dtlcj"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.245153 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.248382 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.248550 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.250912 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zt82k"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.251225 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.254153 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.254310 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.260634 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-mpp5j"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.260666 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lmnf"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.260902 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.261092 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265610 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265639 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265655 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265665 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265679 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xgf9n"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.265842 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268290 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268320 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268340 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268348 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268356 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dnr7x"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268364 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268373 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-6vnnq"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268381 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268388 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268397 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268407 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268414 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268422 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268431 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xgf9n"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268436 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-lhp9s"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268694 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268727 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268767 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268784 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268798 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268817 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lmnf"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.268834 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhrv8"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.272766 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zxqlg"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.272922 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276097 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9dxbw"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276129 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-kv2lw"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276145 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-w5c5q"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276162 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xgf9n"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276177 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276191 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276203 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276217 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zt82k"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276367 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dtlcj"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276395 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zxqlg"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276401 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276543 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zxqlg"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276558 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276578 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-7f8nt"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.276594 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zcpqk"]
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.279799 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zcpqk"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.280439 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.300285 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.320760 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.331497 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:11.831472977 +0000 UTC m=+130.798119285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff64b385-3fa7-412e-8e7c-a465f30f98e3-config\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331733 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hc76\" (UniqueName: \"kubernetes.io/projected/ff64b385-3fa7-412e-8e7c-a465f30f98e3-kube-api-access-9hc76\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331754 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-serving-cert\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b9f121a3-1529-44dd-b4b4-18165c6865b0-tmp-dir\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331892 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331949 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-machine-approver-tls\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.331986 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.332249 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:11.832235448 +0000 UTC m=+130.798881746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332351 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b9f121a3-1529-44dd-b4b4-18165c6865b0-tmp-dir\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwlf\" (UniqueName: \"kubernetes.io/projected/b9f121a3-1529-44dd-b4b4-18165c6865b0-kube-api-access-gbwlf\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332422 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6w54\" (UniqueName: \"kubernetes.io/projected/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-kube-api-access-g6w54\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332586 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332642 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332666 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-config\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332867 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff64b385-3fa7-412e-8e7c-a465f30f98e3-config\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.332967 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333117 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333411 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-config\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333367 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8tlr2\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333693 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-auth-proxy-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.333951 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334062 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-client\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334164 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff64b385-3fa7-412e-8e7c-a465f30f98e3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334234 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.334803 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-auth-proxy-config\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.336918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.337297 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-etcd-client\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.337780 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.338043 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f121a3-1529-44dd-b4b4-18165c6865b0-serving-cert\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.338745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-machine-approver-tls\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.340120 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.340508 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff64b385-3fa7-412e-8e7c-a465f30f98e3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.362056 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.380102 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.402053 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.432255 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.435220 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.435516 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:11.935422165 +0000 UTC m=+130.902068473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.435773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.436124 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:11.936112084 +0000 UTC m=+130.902758392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.441047 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.462434 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.481742 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.508817 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.520717 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.536624 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.536787 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.036762184 +0000 UTC m=+131.003408492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.537173 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.537443 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.037436531 +0000 UTC m=+131.004082839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.540899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.560760 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.580490 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.601772 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.621475 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.637799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.638085 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.138068311 +0000 UTC m=+131.104714619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.641345 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.667229 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.680395 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.700975 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.720986 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.739611 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.740139 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.240113138 +0000 UTC m=+131.206759486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.740907 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.762031 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.781101 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.801827 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.821702 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.840154 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.840367 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.340334406 +0000 UTC m=+131.306980734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.840670 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.840803 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 
00:11:11.841448 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.341420995 +0000 UTC m=+131.308067333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.860633 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.881186 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.903049 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.920698 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.940550 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.942268 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.942427 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.442397833 +0000 UTC m=+131.409044171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.942727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:11 crc kubenswrapper[5108]: E0219 00:11:11.943119 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.443110763 +0000 UTC m=+131.409757071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.960355 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 19 00:11:11 crc kubenswrapper[5108]: I0219 00:11:11.980386 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.001734 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.020437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.044029 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.044171 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:12.544146273 +0000 UTC m=+131.510792591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.044401 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.044746 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.544738868 +0000 UTC m=+131.511385176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.049342 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.060213 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.101155 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.120508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.139791 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.145057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.145463 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.645428189 +0000 UTC m=+131.612074507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.145786 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.146280 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.646257782 +0000 UTC m=+131.612904150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.161639 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.181828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.199512 5108 request.go:752] "Waited before sending request" delay="1.008765879s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.201642 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.221193 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.241397 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.246579 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.246756 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.746726847 +0000 UTC m=+131.713373165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.246888 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.247265 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.74724955 +0000 UTC m=+131.713895868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.260860 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.281505 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.301636 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.321300 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.341460 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.348238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.348519 5108 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.848487666 +0000 UTC m=+131.815133994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.361445 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.381366 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.401591 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.422043 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.441505 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.449780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.450899 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:12.950862302 +0000 UTC m=+131.917508640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.461570 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.481755 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.502630 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.521629 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.541315 5108 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.551545 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.551786 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.051758699 +0000 UTC m=+132.018405007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.552287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.552727 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.052715884 +0000 UTC m=+132.019362202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.561697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.609891 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5skf4\" (UniqueName: \"kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4\") pod \"controller-manager-65b6cccf98-qxx5n\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.629137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lw7m\" (UniqueName: \"kubernetes.io/projected/da814e69-94f0-4857-92fe-048de6d4b60d-kube-api-access-8lw7m\") pod \"cluster-samples-operator-6b564684c8-wjnbj\" (UID: \"da814e69-94f0-4857-92fe-048de6d4b60d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.640771 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk7b\" (UniqueName: 
\"kubernetes.io/projected/347e23fe-fd18-4ee1-a333-1302eefd97e8-kube-api-access-kqk7b\") pod \"apiserver-9ddfb9f55-lhp9s\" (UID: \"347e23fe-fd18-4ee1-a333-1302eefd97e8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.653882 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.654060 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.154036029 +0000 UTC m=+132.120682347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.654190 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.654501 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.154493751 +0000 UTC m=+132.121140059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.660134 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.665728 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42td\" (UniqueName: \"kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td\") pod \"oauth-openshift-66458b6674-hhd9x\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") " pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.670427 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.681175 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.702411 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.720812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.740706 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.755601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.755762 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.255733993 +0000 UTC m=+132.222380301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.756160 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.756586 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.256575866 +0000 UTC m=+132.223222184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.801960 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/960bf537-20fc-4209-b634-54e0046436b3-kube-api-access-r6v9g\") pod \"apiserver-8596bd845d-9qw7s\" (UID: \"960bf537-20fc-4209-b634-54e0046436b3\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.816673 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.820113 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.825952 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.841136 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbj6\" (UniqueName: \"kubernetes.io/projected/42b3d96e-f22d-424a-8faa-edc7ca0b5fb4-kube-api-access-6mbj6\") pod \"cluster-image-registry-operator-86c45576b9-gp96k\" (UID: \"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.857479 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.857706 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.357676334 +0000 UTC m=+132.324322642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.858124 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.858741 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.358719832 +0000 UTC m=+132.325366140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.866773 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"] Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.870583 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjnl9\" (UniqueName: \"kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9\") pod \"image-pruner-29524320-mpp5j\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") " pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.878455 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwb8\" (UniqueName: \"kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8\") pod \"route-controller-manager-776cdc94d6-lkp65\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.892829 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.901683 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.902631 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcqk\" (UniqueName: \"kubernetes.io/projected/76d1bae7-e54a-44be-9688-fcce4fd96146-kube-api-access-vvcqk\") pod \"authentication-operator-7f5c659b84-8hsrp\" (UID: \"76d1bae7-e54a-44be-9688-fcce4fd96146\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.921362 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.923862 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.940872 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.955378 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-mpp5j" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.959269 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:12 crc kubenswrapper[5108]: E0219 00:11:12.959882 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.459864982 +0000 UTC m=+132.426511290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.962141 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.984819 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.987949 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:12 crc kubenswrapper[5108]: I0219 00:11:12.994691 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.001121 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.021837 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.044294 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.066579 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.072032 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.072466 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.572450015 +0000 UTC m=+132.539096323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.079589 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.084345 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.101409 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.121751 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.138241 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-lhp9s"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.149417 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.154130 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s"] Feb 19 00:11:13 crc kubenswrapper[5108]: W0219 00:11:13.157897 5108 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod347e23fe_fd18_4ee1_a333_1302eefd97e8.slice/crio-c8d4a09a365ca8e18e178ca1c4860100bc4058ef78fca0efe77dec3d5eda2ba0 WatchSource:0}: Error finding container c8d4a09a365ca8e18e178ca1c4860100bc4058ef78fca0efe77dec3d5eda2ba0: Status 404 returned error can't find the container with id c8d4a09a365ca8e18e178ca1c4860100bc4058ef78fca0efe77dec3d5eda2ba0 Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.159036 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.160566 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.171734 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.173067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.173309 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.673293477 +0000 UTC m=+132.639939785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.183380 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.199253 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-mpp5j"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.202251 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.226450 5108 request.go:752] "Waited before sending request" delay="1.957806213s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-9pgs7&limit=500&resourceVersion=0" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.230081 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.247749 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.260519 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"] 
Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.262346 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.275037 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.275378 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.775364272 +0000 UTC m=+132.742010580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.283258 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.295714 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.300373 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 19 00:11:13 crc kubenswrapper[5108]: W0219 00:11:13.319213 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b3d96e_f22d_424a_8faa_edc7ca0b5fb4.slice/crio-d65540c7d0a1ebee7752b9015b09169ae82d81d47812f77b22b41e7d98320055 WatchSource:0}: Error finding container d65540c7d0a1ebee7752b9015b09169ae82d81d47812f77b22b41e7d98320055: Status 404 returned error can't find the container with id d65540c7d0a1ebee7752b9015b09169ae82d81d47812f77b22b41e7d98320055 Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.321442 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.341687 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 
00:11:13.360618 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.376723 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.377093 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.877055066 +0000 UTC m=+132.843701364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.377384 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.377962 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.877867908 +0000 UTC m=+132.844514216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.381107 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.398819 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.403098 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 19 00:11:13 crc kubenswrapper[5108]: W0219 00:11:13.409376 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d1bae7_e54a_44be_9688_fcce4fd96146.slice/crio-e2e6698694fb60714f3110da1aa3c1bd568a2f89b7d236a1092e9007fc441dc7 WatchSource:0}: Error finding container e2e6698694fb60714f3110da1aa3c1bd568a2f89b7d236a1092e9007fc441dc7: Status 404 returned error can't find the container with id e2e6698694fb60714f3110da1aa3c1bd568a2f89b7d236a1092e9007fc441dc7 Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.421623 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.461550 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hc76\" (UniqueName: \"kubernetes.io/projected/ff64b385-3fa7-412e-8e7c-a465f30f98e3-kube-api-access-9hc76\") pod \"openshift-apiserver-operator-846cbfc458-2rb48\" (UID: \"ff64b385-3fa7-412e-8e7c-a465f30f98e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.479602 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwlf\" (UniqueName: \"kubernetes.io/projected/b9f121a3-1529-44dd-b4b4-18165c6865b0-kube-api-access-gbwlf\") pod \"etcd-operator-69b85846b6-nhp7g\" (UID: \"b9f121a3-1529-44dd-b4b4-18165c6865b0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.481707 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.981677419 +0000 UTC m=+132.948323727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.485006 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.485620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.486033 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:13.986016054 +0000 UTC m=+132.952662362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.504370 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6w54\" (UniqueName: \"kubernetes.io/projected/a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa-kube-api-access-g6w54\") pod \"machine-approver-54c688565-ghtdz\" (UID: \"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.504729 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.528036 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.535521 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tlr2\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: W0219 00:11:13.536562 5108 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5fa619f_1ea3_4237_8fbe_6b1a821d5bfa.slice/crio-ce9f9c47ca1d30d0b79b96fc6d826b921abc596358b1ebc498832bdf6c947702 WatchSource:0}: Error finding container ce9f9c47ca1d30d0b79b96fc6d826b921abc596358b1ebc498832bdf6c947702: Status 404 returned error can't find the container with id ce9f9c47ca1d30d0b79b96fc6d826b921abc596358b1ebc498832bdf6c947702 Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.543706 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-mpp5j" event={"ID":"5336aa1a-347f-403d-8bb6-882d11120822","Type":"ContainerStarted","Data":"5ce9ca5c0a8abd5dc27f4dec993e9cc0bad46b6f4f8c8216af8046b994868f1e"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.543754 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-mpp5j" event={"ID":"5336aa1a-347f-403d-8bb6-882d11120822","Type":"ContainerStarted","Data":"94d45cfb870acf3761a8b312f729cdf86ac5869246eaedafefca83782cf8b237"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.545858 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" event={"ID":"da814e69-94f0-4857-92fe-048de6d4b60d","Type":"ContainerStarted","Data":"a56a9b1bbc9a57f207a64f678fe39eae87058a1865549982bd1bd33b28a16a83"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.551251 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" event={"ID":"960bf537-20fc-4209-b634-54e0046436b3","Type":"ContainerStarted","Data":"6925f3238f91e43373a66d6cb9b263423d5b88e17d421ad8c2b7770709569d70"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.554351 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" 
event={"ID":"603b852f-0dcf-40af-b879-4df324bb8326","Type":"ContainerStarted","Data":"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.554403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" event={"ID":"603b852f-0dcf-40af-b879-4df324bb8326","Type":"ContainerStarted","Data":"3bfac49b1f0e29d933f1140c8a416a5e9865655f2ee6d7948a482ce1d9cc94bd"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.556910 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" event={"ID":"62ae86d1-5727-4420-9503-8d2aa58266ff","Type":"ContainerStarted","Data":"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.556975 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" event={"ID":"62ae86d1-5727-4420-9503-8d2aa58266ff","Type":"ContainerStarted","Data":"5f411d8b755708fe88d577ddb3488c0ded653dbf4a62f478b941011ac6833e4e"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.562609 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" event={"ID":"347e23fe-fd18-4ee1-a333-1302eefd97e8","Type":"ContainerStarted","Data":"c8d4a09a365ca8e18e178ca1c4860100bc4058ef78fca0efe77dec3d5eda2ba0"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.563726 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.564908 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" event={"ID":"b5775541-9300-4451-95dd-cb81bd25dd50","Type":"ContainerStarted","Data":"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.564969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" event={"ID":"b5775541-9300-4451-95dd-cb81bd25dd50","Type":"ContainerStarted","Data":"3bb9b7133ad080274f0d30bd164c58934f68f5711d34aa25c60fe4320840d379"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.567851 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" event={"ID":"76d1bae7-e54a-44be-9688-fcce4fd96146","Type":"ContainerStarted","Data":"e2e6698694fb60714f3110da1aa3c1bd568a2f89b7d236a1092e9007fc441dc7"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.569554 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" event={"ID":"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4","Type":"ContainerStarted","Data":"d6e5b63bbbc7aa4604a75487f7e35f7dee06fbb406fca002514808f9081ec229"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.569627 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" event={"ID":"42b3d96e-f22d-424a-8faa-edc7ca0b5fb4","Type":"ContainerStarted","Data":"d65540c7d0a1ebee7752b9015b09169ae82d81d47812f77b22b41e7d98320055"} Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.575797 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.575845 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.575861 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.577758 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-lkp65 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.577821 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.577869 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-qxx5n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.577913 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": 
dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.578552 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-hhd9x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.578594 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586242 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.586412 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.086386834 +0000 UTC m=+133.053033142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586468 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-config\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586501 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-console-config\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586527 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-service-ca\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586671 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-oauth-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: 
\"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586698 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-srv-cert\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586738 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8df7b2bf-ae29-417e-a699-a4d6140db6ff-serving-cert\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586834 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/98aac6ae-e129-4ce6-9b45-3eb23232be7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586893 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgb82\" (UniqueName: \"kubernetes.io/projected/cecf671f-2c8e-4821-8047-f740b18c3d04-kube-api-access-mgb82\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 
00:11:13.586948 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cb0844-2028-4cfa-acba-18e5d2c57986-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.586982 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xk6j\" (UniqueName: \"kubernetes.io/projected/1972f121-c7ba-4edb-817f-093975dff371-kube-api-access-9xk6j\") pod \"downloads-747b44746d-6vnnq\" (UID: \"1972f121-c7ba-4edb-817f-093975dff371\") " pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587012 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mjr\" (UniqueName: \"kubernetes.io/projected/dda1c305-da89-4c31-a229-073abe8757de-kube-api-access-82mjr\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587088 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9drm\" (UniqueName: \"kubernetes.io/projected/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-kube-api-access-p9drm\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587166 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-trusted-ca\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587213 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-metrics-tls\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587276 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ccd36e-bf71-4a9b-93e5-8e972ecef049-config\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-config\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587354 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn66d\" (UniqueName: \"kubernetes.io/projected/8df7b2bf-ae29-417e-a699-a4d6140db6ff-kube-api-access-nn66d\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587385 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587411 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587441 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d20929d-50ab-4bea-8fe0-c3963930537f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587471 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bxd\" (UniqueName: \"kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587505 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708be0b2-c6b4-4167-a1cb-e71e5c078013-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587532 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-key\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba37545-146a-4d15-8fc4-4a3c3ef1efab-config\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587640 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac53ce4-90c0-4d12-8250-97e095faa921-config\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587695 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-config\") pod 
\"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587773 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-cabundle\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.587804 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpssn\" (UniqueName: \"kubernetes.io/projected/5af44a88-046f-4a49-aa06-a2cdf10eb333-kube-api-access-zpssn\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.589573 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ccd36e-bf71-4a9b-93e5-8e972ecef049-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.590089 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-oauth-config\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 
00:11:13.590242 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/360f6faf-c020-47cd-9b9e-3b931df6bf11-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.590527 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.591128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hksnt\" (UniqueName: \"kubernetes.io/projected/cac53ce4-90c0-4d12-8250-97e095faa921-kube-api-access-hksnt\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.591272 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-tmp-dir\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.591406 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2026-02-19 00:11:14.091384486 +0000 UTC m=+133.058030854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.591513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmx45\" (UniqueName: \"kubernetes.io/projected/98aac6ae-e129-4ce6-9b45-3eb23232be7d-kube-api-access-wmx45\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.591834 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnwg\" (UniqueName: \"kubernetes.io/projected/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-kube-api-access-gmnwg\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.591912 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-images\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.592056 
5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5af44a88-046f-4a49-aa06-a2cdf10eb333-tmpfs\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594191 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594253 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rbj\" (UniqueName: \"kubernetes.io/projected/708be0b2-c6b4-4167-a1cb-e71e5c078013-kube-api-access-j8rbj\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594287 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/68ccd36e-bf71-4a9b-93e5-8e972ecef049-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594328 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-srv-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594357 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594421 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/360f6faf-c020-47cd-9b9e-3b931df6bf11-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594473 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ccd36e-bf71-4a9b-93e5-8e972ecef049-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594492 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594513 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl56r\" (UniqueName: \"kubernetes.io/projected/6975a144-b433-427f-9319-27a9b81143ef-kube-api-access-cl56r\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cb0844-2028-4cfa-acba-18e5d2c57986-config\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594579 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-metrics-certs\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594621 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hf79g\" (UniqueName: \"kubernetes.io/projected/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-kube-api-access-hf79g\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594685 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac53ce4-90c0-4d12-8250-97e095faa921-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594836 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-serving-cert\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.594866 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-555q7\" (UniqueName: \"kubernetes.io/projected/7d20929d-50ab-4bea-8fe0-c3963930537f-kube-api-access-555q7\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.598343 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599069 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cecf671f-2c8e-4821-8047-f740b18c3d04-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599215 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bba37545-146a-4d15-8fc4-4a3c3ef1efab-serving-cert\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599255 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-service-ca-bundle\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599483 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599567 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdqm\" (UniqueName: \"kubernetes.io/projected/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-kube-api-access-jmdqm\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599623 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8xw\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-kube-api-access-dn8xw\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599744 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsv7v\" (UniqueName: \"kubernetes.io/projected/bba37545-146a-4d15-8fc4-4a3c3ef1efab-kube-api-access-nsv7v\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599797 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599819 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.599853 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600343 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708be0b2-c6b4-4167-a1cb-e71e5c078013-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600386 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftmd\" (UniqueName: \"kubernetes.io/projected/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-kube-api-access-hftmd\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600874 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-serving-cert\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600927 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8df7b2bf-ae29-417e-a699-a4d6140db6ff-available-featuregates\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600965 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda1c305-da89-4c31-a229-073abe8757de-tmpfs\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.600983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-default-certificate\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cb0844-2028-4cfa-acba-18e5d2c57986-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601072 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601160 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27cb0844-2028-4cfa-acba-18e5d2c57986-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601524 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-stats-auth\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601666 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.601720 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.704512 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.704680 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.204647048 +0000 UTC m=+133.171293366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705107 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705142 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705169 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5wx\" (UniqueName: \"kubernetes.io/projected/5457fc3a-6263-4957-9cc1-09d6364eba65-kube-api-access-rf5wx\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rbj\" (UniqueName: \"kubernetes.io/projected/708be0b2-c6b4-4167-a1cb-e71e5c078013-kube-api-access-j8rbj\") pod 
\"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705235 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/68ccd36e-bf71-4a9b-93e5-8e972ecef049-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705260 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-images\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705299 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-srv-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705326 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/360f6faf-c020-47cd-9b9e-3b931df6bf11-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705415 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ccd36e-bf71-4a9b-93e5-8e972ecef049-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705436 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705460 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-plugins-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" 
Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cl56r\" (UniqueName: \"kubernetes.io/projected/6975a144-b433-427f-9319-27a9b81143ef-kube-api-access-cl56r\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705512 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cb0844-2028-4cfa-acba-18e5d2c57986-config\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705561 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-metrics-certs\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705598 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hf79g\" (UniqueName: \"kubernetes.io/projected/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-kube-api-access-hf79g\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: 
\"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705623 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-csi-data-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705648 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac53ce4-90c0-4d12-8250-97e095faa921-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705722 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-serving-cert\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705748 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-555q7\" (UniqueName: \"kubernetes.io/projected/7d20929d-50ab-4bea-8fe0-c3963930537f-kube-api-access-555q7\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705784 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705808 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cecf671f-2c8e-4821-8047-f740b18c3d04-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705831 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9tb\" (UniqueName: \"kubernetes.io/projected/45c6feda-c272-4a12-b1fb-ad25af916694-kube-api-access-km9tb\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705854 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-certs\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705876 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzjwh\" (UniqueName: \"kubernetes.io/projected/685c7729-e78d-4436-90ba-8e2097c0faac-kube-api-access-qzjwh\") pod 
\"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705902 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bba37545-146a-4d15-8fc4-4a3c3ef1efab-serving-cert\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705922 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-service-ca-bundle\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.705987 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qqpf\" (UniqueName: \"kubernetes.io/projected/470ce3a4-986e-4d2f-91a7-127e9d03d057-kube-api-access-6qqpf\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdwb\" (UniqueName: \"kubernetes.io/projected/a65dad46-b3c3-4025-9c41-acdb4c614e7f-kube-api-access-vvdwb\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706040 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdqm\" (UniqueName: \"kubernetes.io/projected/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-kube-api-access-jmdqm\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8xw\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-kube-api-access-dn8xw\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706316 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652hl\" (UniqueName: \"kubernetes.io/projected/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-kube-api-access-652hl\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-node-bootstrap-token\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " 
pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.706988 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-socket-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707009 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsv7v\" (UniqueName: \"kubernetes.io/projected/bba37545-146a-4d15-8fc4-4a3c3ef1efab-kube-api-access-nsv7v\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707025 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707046 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t9x\" (UniqueName: \"kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707073 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707090 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-tmp-dir\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707135 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708be0b2-c6b4-4167-a1cb-e71e5c078013-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707151 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hftmd\" (UniqueName: \"kubernetes.io/projected/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-kube-api-access-hftmd\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707170 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6x2\" (UniqueName: \"kubernetes.io/projected/c7c22258-4003-4696-805b-422c06068fe9-kube-api-access-9s6x2\") pod \"migrator-866fcbc849-x2qdv\" (UID: \"c7c22258-4003-4696-805b-422c06068fe9\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707225 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-serving-cert\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707242 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8df7b2bf-ae29-417e-a699-a4d6140db6ff-available-featuregates\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707258 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda1c305-da89-4c31-a229-073abe8757de-tmpfs\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707276 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-default-certificate\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707333 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cb0844-2028-4cfa-acba-18e5d2c57986-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khzhf\" (UniqueName: \"kubernetes.io/projected/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-kube-api-access-khzhf\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.708399 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.707773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27cb0844-2028-4cfa-acba-18e5d2c57986-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.708620 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-config-volume\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.708650 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-metrics-tls\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.708667 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a65dad46-b3c3-4025-9c41-acdb4c614e7f-tmpfs\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.708683 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-apiservice-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709610 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709640 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-stats-auth\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709658 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709677 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-config\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709732 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-console-config\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: 
I0219 00:11:13.709756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-service-ca\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-oauth-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709805 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-srv-cert\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709821 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-registration-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8df7b2bf-ae29-417e-a699-a4d6140db6ff-serving-cert\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 
00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709869 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/98aac6ae-e129-4ce6-9b45-3eb23232be7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709891 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgb82\" (UniqueName: \"kubernetes.io/projected/cecf671f-2c8e-4821-8047-f740b18c3d04-kube-api-access-mgb82\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cb0844-2028-4cfa-acba-18e5d2c57986-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.709923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-webhook-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.710003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/685c7729-e78d-4436-90ba-8e2097c0faac-webhook-certs\") pod \"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.710066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xk6j\" (UniqueName: \"kubernetes.io/projected/1972f121-c7ba-4edb-817f-093975dff371-kube-api-access-9xk6j\") pod \"downloads-747b44746d-6vnnq\" (UID: \"1972f121-c7ba-4edb-817f-093975dff371\") " pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.710088 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82mjr\" (UniqueName: \"kubernetes.io/projected/dda1c305-da89-4c31-a229-073abe8757de-kube-api-access-82mjr\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.710973 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-service-ca\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711089 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9drm\" (UniqueName: \"kubernetes.io/projected/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-kube-api-access-p9drm\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711169 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-trusted-ca\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711220 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-metrics-tls\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711252 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ccd36e-bf71-4a9b-93e5-8e972ecef049-config\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711304 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-config\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nn66d\" (UniqueName: \"kubernetes.io/projected/8df7b2bf-ae29-417e-a699-a4d6140db6ff-kube-api-access-nn66d\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711419 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711444 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d20929d-50ab-4bea-8fe0-c3963930537f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bxd\" (UniqueName: \"kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711517 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708be0b2-c6b4-4167-a1cb-e71e5c078013-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-key\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711596 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba37545-146a-4d15-8fc4-4a3c3ef1efab-config\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.711644 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac53ce4-90c0-4d12-8250-97e095faa921-config\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-config\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 
19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712089 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5457fc3a-6263-4957-9cc1-09d6364eba65-cert\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712160 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712197 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-cabundle\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712238 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpssn\" (UniqueName: \"kubernetes.io/projected/5af44a88-046f-4a49-aa06-a2cdf10eb333-kube-api-access-zpssn\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712265 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45c6feda-c272-4a12-b1fb-ad25af916694-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: 
\"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.712308 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ccd36e-bf71-4a9b-93e5-8e972ecef049-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-oauth-config\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713109 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/360f6faf-c020-47cd-9b9e-3b931df6bf11-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713178 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-config\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713225 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713256 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-mountpoint-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713325 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hksnt\" (UniqueName: \"kubernetes.io/projected/cac53ce4-90c0-4d12-8250-97e095faa921-kube-api-access-hksnt\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-tmp-dir\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713392 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmx45\" (UniqueName: \"kubernetes.io/projected/98aac6ae-e129-4ce6-9b45-3eb23232be7d-kube-api-access-wmx45\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713419 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.713469 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnwg\" (UniqueName: \"kubernetes.io/projected/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-kube-api-access-gmnwg\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.714618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-images\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.714839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-cabundle\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.714886 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-oauth-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.715821 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.717197 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-config\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.717448 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8df7b2bf-ae29-417e-a699-a4d6140db6ff-available-featuregates\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.718798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8df7b2bf-ae29-417e-a699-a4d6140db6ff-serving-cert\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.719127 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27cb0844-2028-4cfa-acba-18e5d2c57986-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.720600 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-images\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.722645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-serving-cert\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.722952 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-serving-cert\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.716304 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda1c305-da89-4c31-a229-073abe8757de-tmpfs\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.717407 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.726269 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708be0b2-c6b4-4167-a1cb-e71e5c078013-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.726685 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-metrics-certs\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.726880 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/68ccd36e-bf71-4a9b-93e5-8e972ecef049-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.727342 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7d20929d-50ab-4bea-8fe0-c3963930537f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.727628 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cb0844-2028-4cfa-acba-18e5d2c57986-config\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.728241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.728671 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/360f6faf-c020-47cd-9b9e-3b931df6bf11-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.728711 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.728804 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac53ce4-90c0-4d12-8250-97e095faa921-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.729311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-service-ca-bundle\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.729394 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-tmp-dir\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.729603 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.229586031 +0000 UTC m=+133.196232399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.730214 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-srv-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.731075 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5af44a88-046f-4a49-aa06-a2cdf10eb333-tmpfs\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.731275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5t8l\" (UniqueName: \"kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.732338 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5af44a88-046f-4a49-aa06-a2cdf10eb333-tmpfs\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") 
" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.732802 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-config\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.733717 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cecf671f-2c8e-4821-8047-f740b18c3d04-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.733755 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/27cb0844-2028-4cfa-acba-18e5d2c57986-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.733879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-stats-auth\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.734718 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-oauth-config\") pod 
\"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.734759 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-trusted-ca\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.736042 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac53ce4-90c0-4d12-8250-97e095faa921-config\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.736577 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6975a144-b433-427f-9319-27a9b81143ef-console-config\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737048 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-srv-cert\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737308 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/98aac6ae-e129-4ce6-9b45-3eb23232be7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737551 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/360f6faf-c020-47cd-9b9e-3b931df6bf11-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737707 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-config\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-tmp-dir\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.737983 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-default-certificate\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.738696 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708be0b2-c6b4-4167-a1cb-e71e5c078013-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.740529 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ccd36e-bf71-4a9b-93e5-8e972ecef049-config\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.741907 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.742863 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cecf671f-2c8e-4821-8047-f740b18c3d04-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.745604 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba37545-146a-4d15-8fc4-4a3c3ef1efab-config\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.745732 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5af44a88-046f-4a49-aa06-a2cdf10eb333-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.746138 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bba37545-146a-4d15-8fc4-4a3c3ef1efab-serving-cert\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.747561 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-signing-key\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.748348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68ccd36e-bf71-4a9b-93e5-8e972ecef049-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.748972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dda1c305-da89-4c31-a229-073abe8757de-profile-collector-cert\") 
pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.749519 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-metrics-tls\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.749553 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6975a144-b433-427f-9319-27a9b81143ef-console-serving-cert\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.754203 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdqm\" (UniqueName: \"kubernetes.io/projected/1946b030-f3bc-41e1-b611-a2dcb84a9d2d-kube-api-access-jmdqm\") pod \"console-operator-67c89758df-kv2lw\" (UID: \"1946b030-f3bc-41e1-b611-a2dcb84a9d2d\") " pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.756839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8xw\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-kube-api-access-dn8xw\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.778532 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rbj\" (UniqueName: 
\"kubernetes.io/projected/708be0b2-c6b4-4167-a1cb-e71e5c078013-kube-api-access-j8rbj\") pod \"machine-config-controller-f9cdd68f7-2d228\" (UID: \"708be0b2-c6b4-4167-a1cb-e71e5c078013\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.787517 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.798984 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn66d\" (UniqueName: \"kubernetes.io/projected/8df7b2bf-ae29-417e-a699-a4d6140db6ff-kube-api-access-nn66d\") pod \"openshift-config-operator-5777786469-w5c5q\" (UID: \"8df7b2bf-ae29-417e-a699-a4d6140db6ff\") " pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.818852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgb82\" (UniqueName: \"kubernetes.io/projected/cecf671f-2c8e-4821-8047-f740b18c3d04-kube-api-access-mgb82\") pod \"machine-config-operator-67c9d58cbb-khg6s\" (UID: \"cecf671f-2c8e-4821-8047-f740b18c3d04\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.827861 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g"] Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832324 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832623 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-registration-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832665 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-webhook-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832690 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/685c7729-e78d-4436-90ba-8e2097c0faac-webhook-certs\") pod \"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832752 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5457fc3a-6263-4957-9cc1-09d6364eba65-cert\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 
00:11:13.832812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45c6feda-c272-4a12-b1fb-ad25af916694-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832852 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-config\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-mountpoint-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832910 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.832976 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5t8l\" (UniqueName: \"kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833003 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833024 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5wx\" (UniqueName: \"kubernetes.io/projected/5457fc3a-6263-4957-9cc1-09d6364eba65-kube-api-access-rf5wx\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-images\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-plugins-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833118 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-csi-data-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833207 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-km9tb\" (UniqueName: \"kubernetes.io/projected/45c6feda-c272-4a12-b1fb-ad25af916694-kube-api-access-km9tb\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833227 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-certs\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833249 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzjwh\" (UniqueName: \"kubernetes.io/projected/685c7729-e78d-4436-90ba-8e2097c0faac-kube-api-access-qzjwh\") pod \"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833271 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qqpf\" (UniqueName: \"kubernetes.io/projected/470ce3a4-986e-4d2f-91a7-127e9d03d057-kube-api-access-6qqpf\") pod 
\"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833289 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdwb\" (UniqueName: \"kubernetes.io/projected/a65dad46-b3c3-4025-9c41-acdb4c614e7f-kube-api-access-vvdwb\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833323 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-652hl\" (UniqueName: \"kubernetes.io/projected/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-kube-api-access-652hl\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-node-bootstrap-token\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833374 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-socket-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t9x\" (UniqueName: 
\"kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833423 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-tmp-dir\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6x2\" (UniqueName: \"kubernetes.io/projected/c7c22258-4003-4696-805b-422c06068fe9-kube-api-access-9s6x2\") pod \"migrator-866fcbc849-x2qdv\" (UID: \"c7c22258-4003-4696-805b-422c06068fe9\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833509 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khzhf\" (UniqueName: \"kubernetes.io/projected/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-kube-api-access-khzhf\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-config-volume\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-metrics-tls\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a65dad46-b3c3-4025-9c41-acdb4c614e7f-tmpfs\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-apiservice-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.833629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.834666 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.834792 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" 
(UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-csi-data-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.834886 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.334865431 +0000 UTC m=+133.301511749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.835181 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-registration-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.835256 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-mountpoint-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.835460 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.835707 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-tmp-dir\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.836171 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-plugins-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.836429 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-socket-dir\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.836527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-images\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.837136 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a65dad46-b3c3-4025-9c41-acdb4c614e7f-tmpfs\") pod 
\"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.837312 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.837503 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c6feda-c272-4a12-b1fb-ad25af916694-config\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.838271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-config-volume\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.838287 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.838483 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xk6j\" (UniqueName: \"kubernetes.io/projected/1972f121-c7ba-4edb-817f-093975dff371-kube-api-access-9xk6j\") pod \"downloads-747b44746d-6vnnq\" (UID: 
\"1972f121-c7ba-4edb-817f-093975dff371\") " pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.838919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-certs\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.839474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-webhook-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.840392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5457fc3a-6263-4957-9cc1-09d6364eba65-cert\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.841378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45c6feda-c272-4a12-b1fb-ad25af916694-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.841951 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-metrics-tls\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " 
pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.841979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.842787 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a65dad46-b3c3-4025-9c41-acdb4c614e7f-apiservice-cert\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.844719 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/470ce3a4-986e-4d2f-91a7-127e9d03d057-node-bootstrap-token\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.843777 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/685c7729-e78d-4436-90ba-8e2097c0faac-webhook-certs\") pod \"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:13 crc kubenswrapper[5108]: W0219 00:11:13.845458 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9f121a3_1529_44dd_b4b4_18165c6865b0.slice/crio-872689d8897b5dc4272b9687b1e03ce72887ce8727bbfdc17e63b51f1de33292 
WatchSource:0}: Error finding container 872689d8897b5dc4272b9687b1e03ce72887ce8727bbfdc17e63b51f1de33292: Status 404 returned error can't find the container with id 872689d8897b5dc4272b9687b1e03ce72887ce8727bbfdc17e63b51f1de33292 Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.884638 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl56r\" (UniqueName: \"kubernetes.io/projected/6975a144-b433-427f-9319-27a9b81143ef-kube-api-access-cl56r\") pod \"console-64d44f6ddf-9dxbw\" (UID: \"6975a144-b433-427f-9319-27a9b81143ef\") " pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.896111 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82mjr\" (UniqueName: \"kubernetes.io/projected/dda1c305-da89-4c31-a229-073abe8757de-kube-api-access-82mjr\") pod \"catalog-operator-75ff9f647d-nnwxr\" (UID: \"dda1c305-da89-4c31-a229-073abe8757de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.919628 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftmd\" (UniqueName: \"kubernetes.io/projected/b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0-kube-api-access-hftmd\") pod \"service-ca-74545575db-7f8nt\" (UID: \"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0\") " pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.934694 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:13 crc kubenswrapper[5108]: E0219 00:11:13.935073 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.435061266 +0000 UTC m=+133.401707574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.935488 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9drm\" (UniqueName: \"kubernetes.io/projected/99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f-kube-api-access-p9drm\") pod \"router-default-68cf44c8b8-n8lfg\" (UID: \"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f\") " pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.958625 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/360f6faf-c020-47cd-9b9e-3b931df6bf11-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-t8zzc\" (UID: \"360f6faf-c020-47cd-9b9e-3b931df6bf11\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.958891 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.965817 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.972789 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.976893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3-kube-api-access\") pod \"kube-apiserver-operator-575994946d-kxf7j\" (UID: \"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:13 crc kubenswrapper[5108]: I0219 00:11:13.981306 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.001351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.003709 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bxd\" (UniqueName: \"kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd\") pod \"marketplace-operator-547dbd544d-k745b\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.006986 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.013509 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.016980 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27cb0844-2028-4cfa-acba-18e5d2c57986-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-z6xts\" (UID: \"27cb0844-2028-4cfa-acba-18e5d2c57986\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.021135 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.028812 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.035728 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.036412 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.53639216 +0000 UTC m=+133.503038468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.042014 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf79g\" (UniqueName: \"kubernetes.io/projected/4c8e3873-4ad3-41bf-b79b-ab2730ea58be-kube-api-access-hf79g\") pod \"openshift-controller-manager-operator-686468bdd5-47dzx\" (UID: \"4c8e3873-4ad3-41bf-b79b-ab2730ea58be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.056204 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpssn\" (UniqueName: \"kubernetes.io/projected/5af44a88-046f-4a49-aa06-a2cdf10eb333-kube-api-access-zpssn\") pod \"olm-operator-5cdf44d969-w7rrn\" (UID: \"5af44a88-046f-4a49-aa06-a2cdf10eb333\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.063739 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.077153 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hksnt\" (UniqueName: \"kubernetes.io/projected/cac53ce4-90c0-4d12-8250-97e095faa921-kube-api-access-hksnt\") pod \"kube-storage-version-migrator-operator-565b79b866-n2fr8\" (UID: \"cac53ce4-90c0-4d12-8250-97e095faa921\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.096678 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-7f8nt" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.100600 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ccd36e-bf71-4a9b-93e5-8e972ecef049-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-xkb77\" (UID: \"68ccd36e-bf71-4a9b-93e5-8e972ecef049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.106579 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.127413 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmx45\" (UniqueName: \"kubernetes.io/projected/98aac6ae-e129-4ce6-9b45-3eb23232be7d-kube-api-access-wmx45\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2554r\" (UID: \"98aac6ae-e129-4ce6-9b45-3eb23232be7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.138360 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.140906 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.64088896 +0000 UTC m=+133.607535258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.145748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-555q7\" (UniqueName: \"kubernetes.io/projected/7d20929d-50ab-4bea-8fe0-c3963930537f-kube-api-access-555q7\") pod \"package-server-manager-77f986bd66-rzqzz\" (UID: \"7d20929d-50ab-4bea-8fe0-c3963930537f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.159535 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsv7v\" (UniqueName: \"kubernetes.io/projected/bba37545-146a-4d15-8fc4-4a3c3ef1efab-kube-api-access-nsv7v\") pod \"service-ca-operator-5b9c976747-8fkns\" (UID: \"bba37545-146a-4d15-8fc4-4a3c3ef1efab\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.184574 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnwg\" (UniqueName: \"kubernetes.io/projected/17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e-kube-api-access-gmnwg\") pod \"dns-operator-799b87ffcd-dnr7x\" (UID: \"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.202221 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-km9tb\" (UniqueName: 
\"kubernetes.io/projected/45c6feda-c272-4a12-b1fb-ad25af916694-kube-api-access-km9tb\") pod \"machine-api-operator-755bb95488-dtlcj\" (UID: \"45c6feda-c272-4a12-b1fb-ad25af916694\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.210259 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.213874 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.241089 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5t8l\" (UniqueName: \"kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l\") pod \"cni-sysctl-allowlist-ds-dhrv8\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.241438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.241553 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.741515965 +0000 UTC m=+133.708162273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.242135 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.242713 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.742699516 +0000 UTC m=+133.709345814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.249519 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-6vnnq"] Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.259821 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t9x\" (UniqueName: \"kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x\") pod \"collect-profiles-29524320-h7njr\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.266596 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5wx\" (UniqueName: \"kubernetes.io/projected/5457fc3a-6263-4957-9cc1-09d6364eba65-kube-api-access-rf5wx\") pod \"ingress-canary-xgf9n\" (UID: \"5457fc3a-6263-4957-9cc1-09d6364eba65\") " pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.278367 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s6x2\" (UniqueName: \"kubernetes.io/projected/c7c22258-4003-4696-805b-422c06068fe9-kube-api-access-9s6x2\") pod \"migrator-866fcbc849-x2qdv\" (UID: \"c7c22258-4003-4696-805b-422c06068fe9\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.287813 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.296302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.308710 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khzhf\" (UniqueName: \"kubernetes.io/projected/d08b33e3-428b-460b-b3ff-56ffbf1c68f2-kube-api-access-khzhf\") pod \"csi-hostpathplugin-4lmnf\" (UID: \"d08b33e3-428b-460b-b3ff-56ffbf1c68f2\") " pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.334827 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.343778 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.344562 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.844538125 +0000 UTC m=+133.811184433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.346328 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qqpf\" (UniqueName: \"kubernetes.io/projected/470ce3a4-986e-4d2f-91a7-127e9d03d057-kube-api-access-6qqpf\") pod \"machine-config-server-zcpqk\" (UID: \"470ce3a4-986e-4d2f-91a7-127e9d03d057\") " pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.356902 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-652hl\" (UniqueName: \"kubernetes.io/projected/dd0f0187-4ee8-4a18-bbd2-578a4831e5e5-kube-api-access-652hl\") pod \"dns-default-zxqlg\" (UID: \"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5\") " pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.357218 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.371998 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdwb\" (UniqueName: \"kubernetes.io/projected/a65dad46-b3c3-4025-9c41-acdb4c614e7f-kube-api-access-vvdwb\") pod \"packageserver-7d4fc7d867-ffsjp\" (UID: \"a65dad46-b3c3-4025-9c41-acdb4c614e7f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.372214 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" Feb 19 00:11:14 crc kubenswrapper[5108]: W0219 00:11:14.375143 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1972f121_c7ba_4edb_817f_093975dff371.slice/crio-c8e1f22107cb9342e0fe213b6d06607fb7e49e13cca84f848bae67eead04fdd1 WatchSource:0}: Error finding container c8e1f22107cb9342e0fe213b6d06607fb7e49e13cca84f848bae67eead04fdd1: Status 404 returned error can't find the container with id c8e1f22107cb9342e0fe213b6d06607fb7e49e13cca84f848bae67eead04fdd1 Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.382332 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.395509 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzjwh\" (UniqueName: \"kubernetes.io/projected/685c7729-e78d-4436-90ba-8e2097c0faac-kube-api-access-qzjwh\") pod \"multus-admission-controller-69db94689b-zt82k\" (UID: \"685c7729-e78d-4436-90ba-8e2097c0faac\") " pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.388597 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.412428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.423466 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.433170 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.441193 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.445352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.445722 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:14.945710415 +0000 UTC m=+133.912356723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.470575 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.492161 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.502076 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xgf9n" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.518318 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-kv2lw"] Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.518588 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.524986 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" podStartSLOduration=112.524965853 podStartE2EDuration="1m52.524965853s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.521784289 +0000 UTC m=+133.488430617" watchObservedRunningTime="2026-02-19 00:11:14.524965853 +0000 UTC m=+133.491612191" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.527061 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.534412 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zcpqk" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.539551 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228"] Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.546602 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.547015 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.046997329 +0000 UTC m=+134.013643637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.589383 5108 generic.go:358] "Generic (PLEG): container finished" podID="347e23fe-fd18-4ee1-a333-1302eefd97e8" containerID="73beccdc37849c1b4f6b87aaf826b9d1360281f6ae7bbc1479eef09fe49d13c7" exitCode=0 Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.589476 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" event={"ID":"347e23fe-fd18-4ee1-a333-1302eefd97e8","Type":"ContainerDied","Data":"73beccdc37849c1b4f6b87aaf826b9d1360281f6ae7bbc1479eef09fe49d13c7"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.589521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" event={"ID":"347e23fe-fd18-4ee1-a333-1302eefd97e8","Type":"ContainerStarted","Data":"8af20251cfe780d3131718b13f4dc49692a9d2b73c8c43a0ee4f7911171c9be6"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.594036 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" event={"ID":"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f","Type":"ContainerStarted","Data":"f2d2b0cf7811819945a827d73815cec23b53e039bae922f98f97d1e83fe35428"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.604281 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" 
event={"ID":"da814e69-94f0-4857-92fe-048de6d4b60d","Type":"ContainerStarted","Data":"c1ff5e16179afddbd87ce4f33e5a3199f92c0fcdbe0c976b8e0af12b31998ac6"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.604331 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" event={"ID":"da814e69-94f0-4857-92fe-048de6d4b60d","Type":"ContainerStarted","Data":"c76a053e1639bb155ad92741c224e61fcf86c5dd5c78c9e0a5ec7a69e9c4f639"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.606683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" event={"ID":"ff64b385-3fa7-412e-8e7c-a465f30f98e3","Type":"ContainerStarted","Data":"d1e90da22f79f2ccbd4f4bf1fae6b3311b3d9b34e2f4671707e48f168f8fc56f"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.606727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" event={"ID":"ff64b385-3fa7-412e-8e7c-a465f30f98e3","Type":"ContainerStarted","Data":"c45d38af76d8c2f3c326ae2cc54b3458e3a82657c4ee349a28e183973816e531"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.611596 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" event={"ID":"b9f121a3-1529-44dd-b4b4-18165c6865b0","Type":"ContainerStarted","Data":"3f41e08499633cfdc675f68cf35de686db3e35dead9f2187af5cdde383ba1469"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.611739 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" event={"ID":"b9f121a3-1529-44dd-b4b4-18165c6865b0","Type":"ContainerStarted","Data":"872689d8897b5dc4272b9687b1e03ce72887ce8727bbfdc17e63b51f1de33292"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.613210 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-8hsrp" event={"ID":"76d1bae7-e54a-44be-9688-fcce4fd96146","Type":"ContainerStarted","Data":"69c084db4639457bc94d18d4674ef4fb61122151a79f84ce0b9fd0597b5573c4"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.632561 5108 generic.go:358] "Generic (PLEG): container finished" podID="960bf537-20fc-4209-b634-54e0046436b3" containerID="b60a494e394aa5134e652b571ada2d5e775d1b18b6dcc937ecd8cde9796b3e0a" exitCode=0 Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.632700 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" event={"ID":"960bf537-20fc-4209-b634-54e0046436b3","Type":"ContainerDied","Data":"b60a494e394aa5134e652b571ada2d5e775d1b18b6dcc937ecd8cde9796b3e0a"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.639766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-6vnnq" event={"ID":"1972f121-c7ba-4edb-817f-093975dff371","Type":"ContainerStarted","Data":"c8e1f22107cb9342e0fe213b6d06607fb7e49e13cca84f848bae67eead04fdd1"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.644535 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" event={"ID":"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa","Type":"ContainerStarted","Data":"b95d10186603f9bd3e88bf8fbe7820bfdf0c4a73e3bf764b2838f5e7f7bbe929"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.644581 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" event={"ID":"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa","Type":"ContainerStarted","Data":"ce9f9c47ca1d30d0b79b96fc6d826b921abc596358b1ebc498832bdf6c947702"} Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.652425 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.652741 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.152727451 +0000 UTC m=+134.119373759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.654018 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s"] Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.677522 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" podStartSLOduration=112.67750059 podStartE2EDuration="1m52.67750059s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.676313558 +0000 UTC m=+133.642959866" watchObservedRunningTime="2026-02-19 00:11:14.67750059 +0000 UTC m=+133.644146898" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.756051 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.757789 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.257757414 +0000 UTC m=+134.224403722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.859056 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.860670 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.36065549 +0000 UTC m=+134.327301798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.874899 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.917362 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.955236 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29524320-mpp5j" podStartSLOduration=112.955218135 podStartE2EDuration="1m52.955218135s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.954790304 +0000 UTC m=+133.921436612" watchObservedRunningTime="2026-02-19 00:11:14.955218135 +0000 UTC m=+133.921864443" Feb 19 00:11:14 crc kubenswrapper[5108]: I0219 00:11:14.961177 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:14 crc kubenswrapper[5108]: E0219 00:11:14.962074 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.462052167 +0000 UTC m=+134.428698475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.038683 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" podStartSLOduration=113.038662354 podStartE2EDuration="1m53.038662354s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:15.036429554 +0000 UTC m=+134.003075862" watchObservedRunningTime="2026-02-19 00:11:15.038662354 +0000 UTC m=+134.005308662" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.063778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.064088 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.56407441 +0000 UTC m=+134.530720708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.165351 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.165503 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.665475386 +0000 UTC m=+134.632121694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.166006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.166354 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.666348059 +0000 UTC m=+134.632994357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.270868 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.271577 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.771554447 +0000 UTC m=+134.738200755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.318904 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" podStartSLOduration=112.318884816 podStartE2EDuration="1m52.318884816s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:15.272312218 +0000 UTC m=+134.238958546" watchObservedRunningTime="2026-02-19 00:11:15.318884816 +0000 UTC m=+134.285531124" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.372961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.373298 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.873286203 +0000 UTC m=+134.839932511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.474511 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.474924 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:15.974903575 +0000 UTC m=+134.941549883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.476356 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.511977 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gp96k" podStartSLOduration=113.511961831 podStartE2EDuration="1m53.511961831s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:15.510868471 +0000 UTC m=+134.477514779" watchObservedRunningTime="2026-02-19 00:11:15.511961831 +0000 UTC m=+134.478608139" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.538898 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-w5c5q"] Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.554916 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc"] Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.583881 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: 
\"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.584207 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.084194451 +0000 UTC m=+135.050840759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.649025 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-hhd9x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded" start-of-body= Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.649431 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.662780 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" event={"ID":"cecf671f-2c8e-4821-8047-f740b18c3d04","Type":"ContainerStarted","Data":"348ea33564ce04194ecc0b92e1e5682e7ab688b3bdfa20578693b8e24e0e1dbb"} Feb 19 00:11:15 crc kubenswrapper[5108]: 
I0219 00:11:15.662835 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" event={"ID":"cecf671f-2c8e-4821-8047-f740b18c3d04","Type":"ContainerStarted","Data":"a43c9d488b631ca646dd9d82bc8f47cd3fd7289df15751be104f5a66c5fb90cb"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.669971 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zcpqk" event={"ID":"470ce3a4-986e-4d2f-91a7-127e9d03d057","Type":"ContainerStarted","Data":"729e18297c6f8714588aff7bd7635bd89eb906583eea549d8deabef9c24b6640"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.682356 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" event={"ID":"960bf537-20fc-4209-b634-54e0046436b3","Type":"ContainerStarted","Data":"7bb72018e837159fd08c5240c78fd7322ae2028266b1825d0453d2da3abf1bd3"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.691981 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.692629 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.192610915 +0000 UTC m=+135.159257223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.710628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" event={"ID":"1946b030-f3bc-41e1-b611-a2dcb84a9d2d","Type":"ContainerStarted","Data":"b5e5d726013fae499c064fe309a8445cc406d65e66533f751fbdd6b43465fecc"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.710673 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" event={"ID":"1946b030-f3bc-41e1-b611-a2dcb84a9d2d","Type":"ContainerStarted","Data":"a99a1d8f6e9899eceb004a8513f65b05d3271b46a859143ca12e582f92c7d8f2"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.711236 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.725091 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.726731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" event={"ID":"a5fa619f-1ea3-4237-8fbe-6b1a821d5bfa","Type":"ContainerStarted","Data":"f2378e3a269cbf25b283b510f21e4f8deff1a3d4b0d61c0df2b3a4cff98f03c1"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.733579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" event={"ID":"347e23fe-fd18-4ee1-a333-1302eefd97e8","Type":"ContainerStarted","Data":"af58f3dcd4e4737955eed12fbef22c729ad527edc08e322360da9e3f5d5a3b63"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.737901 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" event={"ID":"99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f","Type":"ContainerStarted","Data":"793d488b92e41930a1fb81adcb41f5d5ad24364ceeebd123d2eb55ea16121a01"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.739964 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" event={"ID":"708be0b2-c6b4-4167-a1cb-e71e5c078013","Type":"ContainerStarted","Data":"894d916e4ead714aeb20c901230e795b8040d704b8f8b33f1dd170a12963a9de"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.740006 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" event={"ID":"708be0b2-c6b4-4167-a1cb-e71e5c078013","Type":"ContainerStarted","Data":"ef7875958913835b9b84c8a7b48613c08f9f078d7d28ac726b109cba606b6fde"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.744423 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" event={"ID":"021ddbaf-7df5-4911-afaa-609338cbcd9b","Type":"ContainerStarted","Data":"5155ae8e7666412301b1d23cd2cde74c8fd35b9aaa92de011167d630877b2672"} Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.744627 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.794480 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.795056 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.295037758 +0000 UTC m=+135.261684066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.852331 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-kv2lw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.852410 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-6vnnq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.852409 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" podUID="1946b030-f3bc-41e1-b611-a2dcb84a9d2d" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.852460 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6vnnq" podUID="1972f121-c7ba-4edb-817f-093975dff371" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 00:11:15 crc kubenswrapper[5108]: I0219 00:11:15.895950 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:15 crc kubenswrapper[5108]: E0219 00:11:15.905047 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.405009113 +0000 UTC m=+135.371655431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.010154 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.010654 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.510634242 +0000 UTC m=+135.477280550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.021970 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.023365 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:16 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:16 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:16 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.023409 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.068671 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.071246 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ghtdz" podStartSLOduration=114.071215613 podStartE2EDuration="1m54.071215613s" podCreationTimestamp="2026-02-19 
00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.061045572 +0000 UTC m=+135.027691880" watchObservedRunningTime="2026-02-19 00:11:16.071215613 +0000 UTC m=+135.037861931" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.113894 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.114353 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.614312839 +0000 UTC m=+135.580959147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.114984 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.156134 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-6vnnq" podStartSLOduration=114.156119521 podStartE2EDuration="1m54.156119521s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.102880615 +0000 UTC m=+135.069526923" watchObservedRunningTime="2026-02-19 00:11:16.156119521 +0000 UTC m=+135.122765829" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.184704 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.684636079 +0000 UTC m=+135.651282397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.202128 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.202731 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" podStartSLOduration=114.20272212 podStartE2EDuration="1m54.20272212s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.1553553 +0000 UTC m=+135.122001608" watchObservedRunningTime="2026-02-19 00:11:16.20272212 +0000 UTC m=+135.169368418" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.223546 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.223976 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:16.723921023 +0000 UTC m=+135.690567331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.234067 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rb48" podStartSLOduration=114.234045564 podStartE2EDuration="1m54.234045564s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.211766361 +0000 UTC m=+135.178412669" watchObservedRunningTime="2026-02-19 00:11:16.234045564 +0000 UTC m=+135.200691872" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.274051 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" podStartSLOduration=5.274031656 podStartE2EDuration="5.274031656s" podCreationTimestamp="2026-02-19 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.268439948 +0000 UTC m=+135.235086256" watchObservedRunningTime="2026-02-19 00:11:16.274031656 +0000 UTC m=+135.240677964" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.274087 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-9dxbw"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.306626 5108 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.316899 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wjnbj" podStartSLOduration=114.316867846 podStartE2EDuration="1m54.316867846s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.303231183 +0000 UTC m=+135.269877491" watchObservedRunningTime="2026-02-19 00:11:16.316867846 +0000 UTC m=+135.283514154" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.329429 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.329817 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:16.82979921 +0000 UTC m=+135.796445518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.382636 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" podStartSLOduration=114.382619754 podStartE2EDuration="1m54.382619754s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.381435723 +0000 UTC m=+135.348082031" watchObservedRunningTime="2026-02-19 00:11:16.382619754 +0000 UTC m=+135.349266062" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.431328 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.433579 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.434119 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:16.934100443 +0000 UTC m=+135.900746751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: W0219 00:11:16.475133 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c8e3873_4ad3_41bf_b79b_ab2730ea58be.slice/crio-2f330eb4f54b7b3e19e9b5efc4c1de3ff9d758620b3c8671c75054d35f51819e WatchSource:0}: Error finding container 2f330eb4f54b7b3e19e9b5efc4c1de3ff9d758620b3c8671c75054d35f51819e: Status 404 returned error can't find the container with id 2f330eb4f54b7b3e19e9b5efc4c1de3ff9d758620b3c8671c75054d35f51819e Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.512567 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.529619 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.531227 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" podStartSLOduration=113.531204066 podStartE2EDuration="1m53.531204066s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.509868949 +0000 UTC 
m=+135.476515257" watchObservedRunningTime="2026-02-19 00:11:16.531204066 +0000 UTC m=+135.497850374" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.544098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.544443 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.044429117 +0000 UTC m=+136.011075425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.545192 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.558307 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podStartSLOduration=113.558291306 podStartE2EDuration="1m53.558291306s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.555694446 +0000 UTC m=+135.522340754" watchObservedRunningTime="2026-02-19 00:11:16.558291306 +0000 UTC m=+135.524937614" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.575329 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.578721 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dnr7x"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.600393 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.639561 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nhp7g" podStartSLOduration=114.639548026 podStartE2EDuration="1m54.639548026s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.614919722 +0000 UTC m=+135.581566030" watchObservedRunningTime="2026-02-19 00:11:16.639548026 +0000 UTC m=+135.606194334" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.645704 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.646402 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.146380528 +0000 UTC m=+136.113026836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.649032 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.654200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.658539 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.659598 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.667881 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-7f8nt"] Feb 19 00:11:16 crc kubenswrapper[5108]: W0219 00:11:16.697563 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d20929d_50ab_4bea_8fe0_c3963930537f.slice/crio-ad032ee04d9e8a05fe002dd088e5bdd1d2b3e440595622144ccb7c82ed50bba2 WatchSource:0}: Error finding container 
ad032ee04d9e8a05fe002dd088e5bdd1d2b3e440595622144ccb7c82ed50bba2: Status 404 returned error can't find the container with id ad032ee04d9e8a05fe002dd088e5bdd1d2b3e440595622144ccb7c82ed50bba2 Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.747764 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.748354 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.24833264 +0000 UTC m=+136.214978938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.754990 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zxqlg"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.793061 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dtlcj"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.795131 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.815017 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xgf9n"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.843913 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" event={"ID":"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e","Type":"ContainerStarted","Data":"2e4bd628b8a2496508184be73b0832d4090e13fee3e57ccab8ae8dfec18cdd05"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.850741 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.851009 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.350991839 +0000 UTC m=+136.317638147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.851295 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.851741 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.351713019 +0000 UTC m=+136.318359327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.859324 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lmnf"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.869890 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zt82k"] Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.877578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9dxbw" event={"ID":"6975a144-b433-427f-9319-27a9b81143ef","Type":"ContainerStarted","Data":"ec834d09c727fb02c31d81a0feb2301ad8d2ae381cefd363dbc603f32ff43926"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.880681 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" event={"ID":"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3","Type":"ContainerStarted","Data":"a4370bb0fb47c8ebf7ba4444a07af12ba089e2eb8fdd094093455126993b4b25"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.891391 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" event={"ID":"021ddbaf-7df5-4911-afaa-609338cbcd9b","Type":"ContainerStarted","Data":"d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.907644 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" event={"ID":"cecf671f-2c8e-4821-8047-f740b18c3d04","Type":"ContainerStarted","Data":"ebed3bd96620aa937f4ed8281f1287d721f7846c5db5b44f98ce7ff20f5b3f79"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.921095 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" event={"ID":"27cb0844-2028-4cfa-acba-18e5d2c57986","Type":"ContainerStarted","Data":"a1df82e44ed0dab93cab23db48de34f79472700c755b1c1f576d51e419755b64"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.927176 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-khg6s" podStartSLOduration=113.927149775 podStartE2EDuration="1m53.927149775s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.926572979 +0000 UTC m=+135.893219287" watchObservedRunningTime="2026-02-19 00:11:16.927149775 +0000 UTC m=+135.893796083" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.935848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" event={"ID":"dda1c305-da89-4c31-a229-073abe8757de","Type":"ContainerStarted","Data":"3dd54045d1b0b58184a88fd65ac64ea5ac9e0e3355e06143decaddb3b613f38b"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.935920 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" event={"ID":"dda1c305-da89-4c31-a229-073abe8757de","Type":"ContainerStarted","Data":"bcd875db8313d2f1ef1670c53c44a3f1f2ee9616666a10e3e24834cd240bfbee"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.938315 5108 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.948649 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-nnwxr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.948712 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" podUID="dda1c305-da89-4c31-a229-073abe8757de" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 19 00:11:16 crc kubenswrapper[5108]: W0219 00:11:16.949570 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0f0187_4ee8_4a18_bbd2_578a4831e5e5.slice/crio-220ea92c586cf2522ceaf07f5cd4bdb8b51161365c61acd4e903fd8bfca4555a WatchSource:0}: Error finding container 220ea92c586cf2522ceaf07f5cd4bdb8b51161365c61acd4e903fd8bfca4555a: Status 404 returned error can't find the container with id 220ea92c586cf2522ceaf07f5cd4bdb8b51161365c61acd4e903fd8bfca4555a Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.954027 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:16 crc kubenswrapper[5108]: E0219 00:11:16.955597 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.455570661 +0000 UTC m=+136.422216969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.962952 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" event={"ID":"360f6faf-c020-47cd-9b9e-3b931df6bf11","Type":"ContainerStarted","Data":"8cbb19710c401e2483cc6fcf7c59f36656fb581915c5d736fbcf0944c519c956"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.963003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" event={"ID":"360f6faf-c020-47cd-9b9e-3b931df6bf11","Type":"ContainerStarted","Data":"e8a7f833b15be5eae7be951cf050bafa55785097349331a05f99ddd452120719"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.963098 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" podStartSLOduration=113.96308506 podStartE2EDuration="1m53.96308506s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.962647549 +0000 UTC m=+135.929293847" watchObservedRunningTime="2026-02-19 00:11:16.96308506 +0000 UTC 
m=+135.929731368" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.968118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zcpqk" event={"ID":"470ce3a4-986e-4d2f-91a7-127e9d03d057","Type":"ContainerStarted","Data":"45081b36259c0da207870d9250c10ad61183694f1a30452130985e50496bf3ab"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.973627 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" event={"ID":"1a52a4e5-9502-4222-8090-3c18943abd74","Type":"ContainerStarted","Data":"fa13d7402dddbb0419dcb7fe4aae6ffe81ef24a23a6c7293a64c2dc31bdacef8"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.981299 5108 generic.go:358] "Generic (PLEG): container finished" podID="8df7b2bf-ae29-417e-a699-a4d6140db6ff" containerID="6eb94aabec84f7271228266c0cf2dc30eaad1f9f8c53b425c4711fb9895cf6f8" exitCode=0 Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.981966 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" event={"ID":"8df7b2bf-ae29-417e-a699-a4d6140db6ff","Type":"ContainerDied","Data":"6eb94aabec84f7271228266c0cf2dc30eaad1f9f8c53b425c4711fb9895cf6f8"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.982021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" event={"ID":"8df7b2bf-ae29-417e-a699-a4d6140db6ff","Type":"ContainerStarted","Data":"a5c1aac6e3eed73ff5f957e24d4c01d9978ddf45eea9e3b6d0c233d0677d33d1"} Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.984047 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" event={"ID":"7d20929d-50ab-4bea-8fe0-c3963930537f","Type":"ContainerStarted","Data":"ad032ee04d9e8a05fe002dd088e5bdd1d2b3e440595622144ccb7c82ed50bba2"} Feb 19 
00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.990643 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:16 crc kubenswrapper[5108]: I0219 00:11:16.999381 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zcpqk" podStartSLOduration=5.999363615 podStartE2EDuration="5.999363615s" podCreationTimestamp="2026-02-19 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.997636739 +0000 UTC m=+135.964283037" watchObservedRunningTime="2026-02-19 00:11:16.999363615 +0000 UTC m=+135.966009923" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.013104 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-k745b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.013181 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.021134 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:17 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:17 crc kubenswrapper[5108]: [+]process-running ok 
Feb 19 00:11:17 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.021215 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.025175 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.026055 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" event={"ID":"98aac6ae-e129-4ce6-9b45-3eb23232be7d","Type":"ContainerStarted","Data":"ea5c255a8314bbb17740c6f35062e4c72816f13f5981b65535206c94266b4ca4"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.029579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" event={"ID":"cac53ce4-90c0-4d12-8250-97e095faa921","Type":"ContainerStarted","Data":"fb456814032c6023acd260acfdb061351f233d8c95a635349fcfb917b6d34a85"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.061147 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.066988 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:17.566965793 +0000 UTC m=+136.533612291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.090661 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" podStartSLOduration=115.090629452 podStartE2EDuration="1m55.090629452s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.090193051 +0000 UTC m=+136.056839359" watchObservedRunningTime="2026-02-19 00:11:17.090629452 +0000 UTC m=+136.057275760" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.101309 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-6vnnq" event={"ID":"1972f121-c7ba-4edb-817f-093975dff371","Type":"ContainerStarted","Data":"5455de5cbd7d73958955ed49764470cd8ec6a0112b5efc8459a08a478810fc41"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.102632 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-6vnnq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.102684 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6vnnq" 
podUID="1972f121-c7ba-4edb-817f-093975dff371" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.129873 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" podStartSLOduration=114.129855215 podStartE2EDuration="1m54.129855215s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.129206668 +0000 UTC m=+136.095852976" watchObservedRunningTime="2026-02-19 00:11:17.129855215 +0000 UTC m=+136.096501523" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.137089 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" event={"ID":"68ccd36e-bf71-4a9b-93e5-8e972ecef049","Type":"ContainerStarted","Data":"55abcc109218e4f43a4f2e4d14b90f8eaf3af83cc3ce87f7798c006f1de3a32e"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.142540 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" event={"ID":"bba37545-146a-4d15-8fc4-4a3c3ef1efab","Type":"ContainerStarted","Data":"9c61ebc76fe4acf63c633ec3a5894ec22f6b425abe49919665e7ec0990623ca2"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.163304 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.164617 5108 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.664600119 +0000 UTC m=+136.631246427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.167401 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" event={"ID":"a65dad46-b3c3-4025-9c41-acdb4c614e7f","Type":"ContainerStarted","Data":"eea0ec5fea5c89dd41641130b1fecb42ad2079a7947b03182c1b169043a6e64a"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.193247 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" event={"ID":"5af44a88-046f-4a49-aa06-a2cdf10eb333","Type":"ContainerStarted","Data":"525b7eaa46090f574dbe9cf238a7a5f60949a3cc2f722baad290c02c9bf1ccaa"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.219630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" event={"ID":"4c8e3873-4ad3-41bf-b79b-ab2730ea58be","Type":"ContainerStarted","Data":"2f330eb4f54b7b3e19e9b5efc4c1de3ff9d758620b3c8671c75054d35f51819e"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.243339 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" event={"ID":"708be0b2-c6b4-4167-a1cb-e71e5c078013","Type":"ContainerStarted","Data":"319477946a501721b55ccfc8188a18b8b669e8b0624112a9ff23ed314c851f36"} Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.265304 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.272900 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.772880509 +0000 UTC m=+136.739526997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.303397 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" podStartSLOduration=115.303362819 podStartE2EDuration="1m55.303362819s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.257421817 +0000 UTC m=+136.224068126" watchObservedRunningTime="2026-02-19 00:11:17.303362819 +0000 UTC m=+136.270009127" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.303537 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-2d228" podStartSLOduration=114.303532474 podStartE2EDuration="1m54.303532474s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.292536261 +0000 UTC m=+136.259182589" watchObservedRunningTime="2026-02-19 00:11:17.303532474 +0000 UTC m=+136.270178772" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.369881 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.370335 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.8703144 +0000 UTC m=+136.836960718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.472305 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.473926 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:17.973907885 +0000 UTC m=+136.940554193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.575092 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.575645 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.07562399 +0000 UTC m=+137.042270298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.632523 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-kv2lw" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.679102 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.679600 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.179582335 +0000 UTC m=+137.146228643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.781002 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.781721 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.28170056 +0000 UTC m=+137.248346868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.829280 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.829406 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.851879 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-lhp9s container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]log ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]etcd ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/max-in-flight-filter ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 00:11:17 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 00:11:17 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 19 00:11:17 crc 
kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startinformers ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 00:11:17 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 00:11:17 crc kubenswrapper[5108]: livez check failed Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.852042 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" podUID="347e23fe-fd18-4ee1-a333-1302eefd97e8" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.883312 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.883625 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.383611701 +0000 UTC m=+137.350258009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.921804 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.921868 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.922010 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.968796 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhrv8"] Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.987298 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.987882 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:18.487836492 +0000 UTC m=+137.454482810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:17 crc kubenswrapper[5108]: I0219 00:11:17.988496 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:17 crc kubenswrapper[5108]: E0219 00:11:17.989317 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.489308591 +0000 UTC m=+137.455954899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.019547 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:18 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:18 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:18 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.019629 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.089637 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.089797 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:18.589772373 +0000 UTC m=+137.556418681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.090236 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.090535 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.590528453 +0000 UTC m=+137.557174761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.191104 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.191512 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.691494868 +0000 UTC m=+137.658141176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.292379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.292978 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.792955866 +0000 UTC m=+137.759602174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.299729 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" event={"ID":"ac66a2db-d1f5-4e58-a397-3cd38d4fdbd3","Type":"ContainerStarted","Data":"6e88717822ec2d24c3ac047d37333f5745fa8eb8274b09405c5b13abfba61eea"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.313102 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" event={"ID":"685c7729-e78d-4436-90ba-8e2097c0faac","Type":"ContainerStarted","Data":"64a52e2d7edf0480c42f230e7edc0f07215d5e5bb6fba098a042d95d33876c34"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.338305 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-9dxbw" event={"ID":"6975a144-b433-427f-9319-27a9b81143ef","Type":"ContainerStarted","Data":"564d0aba459446abbf68c7d410a53eca12e54ac389e35be6eff3cfed7bdddc89"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.347763 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" event={"ID":"8df7b2bf-ae29-417e-a699-a4d6140db6ff","Type":"ContainerStarted","Data":"8f8ba4d6167bcda8f2b8b8cf99234cb991a9dc060cd8b791e699a6764da63075"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.350668 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.352578 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-kxf7j" podStartSLOduration=115.352554221 podStartE2EDuration="1m55.352554221s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.333357771 +0000 UTC m=+137.300004089" watchObservedRunningTime="2026-02-19 00:11:18.352554221 +0000 UTC m=+137.319200529" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.362104 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-7f8nt" event={"ID":"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0","Type":"ContainerStarted","Data":"9213f76d41eab6d120c831144d24d83263673071aa141bb23247d06784d5dbc7"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.362158 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-7f8nt" event={"ID":"b29e4e22-2bf8-4616-81ab-ce2d3f21c6d0","Type":"ContainerStarted","Data":"d284265008e8b86fac7199a9fb7ad81b7cfca5216808784612f4ce95515cb62f"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.379108 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" event={"ID":"cac53ce4-90c0-4d12-8250-97e095faa921","Type":"ContainerStarted","Data":"32771f335b7232afd778edbf57851af3a318fd45a4aa7a2c9126d71476588e27"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.393566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.395259 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.895231316 +0000 UTC m=+137.861877624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.396825 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" event={"ID":"68ccd36e-bf71-4a9b-93e5-8e972ecef049","Type":"ContainerStarted","Data":"8478ecf22d34741798cdedea14bb274b04a7060c6ca1b014b6ccc9ab3eb55c39"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.407998 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q" podStartSLOduration=116.407980835 podStartE2EDuration="1m56.407980835s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.402502759 +0000 UTC m=+137.369149067" watchObservedRunningTime="2026-02-19 00:11:18.407980835 +0000 UTC 
m=+137.374627143" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.409020 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-9dxbw" podStartSLOduration=116.409014113 podStartE2EDuration="1m56.409014113s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.377544005 +0000 UTC m=+137.344190333" watchObservedRunningTime="2026-02-19 00:11:18.409014113 +0000 UTC m=+137.375660421" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.414294 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" event={"ID":"bba37545-146a-4d15-8fc4-4a3c3ef1efab","Type":"ContainerStarted","Data":"6daabf09c1ebc6a9f1f9b19b171bfac3efd292baf873380138678cff1544645d"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.431324 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" event={"ID":"d08b33e3-428b-460b-b3ff-56ffbf1c68f2","Type":"ContainerStarted","Data":"cd57d46da9f28cfd3bc7c8188f1acf40226fe3f987130e31fab948ef36eb9860"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.451160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zxqlg" event={"ID":"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5","Type":"ContainerStarted","Data":"220ea92c586cf2522ceaf07f5cd4bdb8b51161365c61acd4e903fd8bfca4555a"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.456139 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" event={"ID":"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e","Type":"ContainerStarted","Data":"9844ef6db40d0e66674f4973bcba6bf7ecbc377a1ee0d322b5e9af3dd5b8132e"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.460705 5108 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-7f8nt" podStartSLOduration=115.460694477 podStartE2EDuration="1m55.460694477s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.438966739 +0000 UTC m=+137.405613047" watchObservedRunningTime="2026-02-19 00:11:18.460694477 +0000 UTC m=+137.427340785" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.471957 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" event={"ID":"c7c22258-4003-4696-805b-422c06068fe9","Type":"ContainerStarted","Data":"f7672bedc6c3e5bc6d60365ac7da88016eb69050b9532221478336538fdb110a"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.472006 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" event={"ID":"c7c22258-4003-4696-805b-422c06068fe9","Type":"ContainerStarted","Data":"1be1ce24b055042d33e254d26161978daad4201972ca974afa83303b6304592f"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.495177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.497254 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xgf9n" event={"ID":"5457fc3a-6263-4957-9cc1-09d6364eba65","Type":"ContainerStarted","Data":"2f2579706ae83f28156deb1443803a46af188723b725168a3c22696f0a8fb562"} Feb 19 00:11:18 crc 
kubenswrapper[5108]: I0219 00:11:18.497307 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xgf9n" event={"ID":"5457fc3a-6263-4957-9cc1-09d6364eba65","Type":"ContainerStarted","Data":"051e7a1c86ee92e7ab1fafdf5aad387d8c814596fbb12440e94878b9cf367e19"} Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.497320 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:18.997307851 +0000 UTC m=+137.963954159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.504752 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-n2fr8" podStartSLOduration=115.504735888 podStartE2EDuration="1m55.504735888s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.461751005 +0000 UTC m=+137.428397303" watchObservedRunningTime="2026-02-19 00:11:18.504735888 +0000 UTC m=+137.471382196" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.506825 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-xkb77" 
podStartSLOduration=115.506816884 podStartE2EDuration="1m55.506816884s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.496918811 +0000 UTC m=+137.463565119" watchObservedRunningTime="2026-02-19 00:11:18.506816884 +0000 UTC m=+137.473463192" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.519717 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" event={"ID":"27cb0844-2028-4cfa-acba-18e5d2c57986","Type":"ContainerStarted","Data":"59e58d597c00053d6fa25e183dbb1e9156ffd6a3a74cbb7b1bb1766db509b971"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.530897 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-t8zzc" event={"ID":"360f6faf-c020-47cd-9b9e-3b931df6bf11","Type":"ContainerStarted","Data":"c34605154314ee77a71e447648766d554157dedbd5f4eed66c7cadc81b2da1c6"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.534795 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" event={"ID":"45c6feda-c272-4a12-b1fb-ad25af916694","Type":"ContainerStarted","Data":"0a4e6c78fd799c7266392129f254b390b3fd42139622e769c090320b00689194"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.534823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" event={"ID":"45c6feda-c272-4a12-b1fb-ad25af916694","Type":"ContainerStarted","Data":"4140a2caceadd0be56e41d9cad1262961885920cfff52255e3b26bf568556522"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.534833 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" 
event={"ID":"45c6feda-c272-4a12-b1fb-ad25af916694","Type":"ContainerStarted","Data":"b384f75a74b7d1f36a2dcf2a654c09cf8bee75cda4aba7e00a9e85f4499d3800"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.554680 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" event={"ID":"1a52a4e5-9502-4222-8090-3c18943abd74","Type":"ContainerStarted","Data":"9f7294aa24b5b6a57fcfe7a4cba4d508dfc953f676ec6b571542117ef5f6d5f5"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.554831 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8fkns" podStartSLOduration=115.55481499 podStartE2EDuration="1m55.55481499s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.553040343 +0000 UTC m=+137.519686651" watchObservedRunningTime="2026-02-19 00:11:18.55481499 +0000 UTC m=+137.521461298" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.563106 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-k745b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.563172 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.591805 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" event={"ID":"7d20929d-50ab-4bea-8fe0-c3963930537f","Type":"ContainerStarted","Data":"14128a4d0cd708c6ac3493e9aec19812a74407629530e54a74ce95d27ca07f3f"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.591904 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.591917 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" event={"ID":"7d20929d-50ab-4bea-8fe0-c3963930537f","Type":"ContainerStarted","Data":"9b9c340b984fbdc619569ff0b486e768a8b7595b4f0e63aedf0617d1f7940ba3"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.596268 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.597211 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.097176586 +0000 UTC m=+138.063822904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.612583 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" event={"ID":"98aac6ae-e129-4ce6-9b45-3eb23232be7d","Type":"ContainerStarted","Data":"92d3607e7e2c2d52509626e677d701dd9456d30adc1c450b0c248feb78187c10"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.635494 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" podStartSLOduration=115.635470905 podStartE2EDuration="1m55.635470905s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.627992426 +0000 UTC m=+137.594638734" watchObservedRunningTime="2026-02-19 00:11:18.635470905 +0000 UTC m=+137.602117213" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.635641 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xgf9n" podStartSLOduration=8.635634539 podStartE2EDuration="8.635634539s" podCreationTimestamp="2026-02-19 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.585764643 +0000 UTC m=+137.552410951" watchObservedRunningTime="2026-02-19 00:11:18.635634539 +0000 UTC m=+137.602280847" Feb 19 00:11:18 crc 
kubenswrapper[5108]: I0219 00:11:18.650126 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" event={"ID":"d1327638-8c00-4315-be3c-f9f8c70720d0","Type":"ContainerStarted","Data":"b8b5c868984f4c228ce6ffc53a1ce30992bd2347dcc934cfbfe67d83ff248575"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.650205 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" event={"ID":"d1327638-8c00-4315-be3c-f9f8c70720d0","Type":"ContainerStarted","Data":"e804faf273116fcfd0dcba126ba65b262fece39d92aa1feba7f28d7e8dac7999"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.681084 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" event={"ID":"a65dad46-b3c3-4025-9c41-acdb4c614e7f","Type":"ContainerStarted","Data":"1ef81954630951146260fbaaf855650c176e0f54f5d0ee922d405883a49b7f9d"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.682798 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-z6xts" podStartSLOduration=115.682775112 podStartE2EDuration="1m55.682775112s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.679247859 +0000 UTC m=+137.645894167" watchObservedRunningTime="2026-02-19 00:11:18.682775112 +0000 UTC m=+137.649421420" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.683842 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.710237 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ffsjp 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.710327 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" podUID="a65dad46-b3c3-4025-9c41-acdb4c614e7f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.711810 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.712970 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.212956855 +0000 UTC m=+138.179603153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.726814 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" event={"ID":"5af44a88-046f-4a49-aa06-a2cdf10eb333","Type":"ContainerStarted","Data":"989832b8bef612e5c6f7fd0c1df35831b09412c2166d6fbe6ff6e88822d14d30"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.728159 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.740681 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.756008 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-47dzx" event={"ID":"4c8e3873-4ad3-41bf-b79b-ab2730ea58be","Type":"ContainerStarted","Data":"22f9a8daa286acbab443359408f6fb7644a2cc3c00318b17ecf512eaf65722bf"} Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.759117 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-6vnnq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.759184 5108 
prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6vnnq" podUID="1972f121-c7ba-4edb-817f-093975dff371" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.768506 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-dtlcj" podStartSLOduration=115.768487932 podStartE2EDuration="1m55.768487932s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.727265786 +0000 UTC m=+137.693912094" watchObservedRunningTime="2026-02-19 00:11:18.768487932 +0000 UTC m=+137.735134230" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.769468 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2554r" podStartSLOduration=115.769462488 podStartE2EDuration="1m55.769462488s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.768208964 +0000 UTC m=+137.734855272" watchObservedRunningTime="2026-02-19 00:11:18.769462488 +0000 UTC m=+137.736108796" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.775947 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-9qw7s" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.796106 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-nnwxr" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.813498 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.814732 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.314716112 +0000 UTC m=+138.281362420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.854756 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" podStartSLOduration=115.854739516 podStartE2EDuration="1m55.854739516s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.830262565 +0000 UTC m=+137.796908873" watchObservedRunningTime="2026-02-19 00:11:18.854739516 +0000 UTC m=+137.821385814" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.889585 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp" 
podStartSLOduration=115.889566602 podStartE2EDuration="1m55.889566602s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.855541237 +0000 UTC m=+137.822187545" watchObservedRunningTime="2026-02-19 00:11:18.889566602 +0000 UTC m=+137.856212900" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.915077 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:18 crc kubenswrapper[5108]: E0219 00:11:18.915477 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.415463531 +0000 UTC m=+138.382109839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.934463 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz" podStartSLOduration=115.934442575 podStartE2EDuration="1m55.934442575s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.892745956 +0000 UTC m=+137.859392264" watchObservedRunningTime="2026-02-19 00:11:18.934442575 +0000 UTC m=+137.901088883" Feb 19 00:11:18 crc kubenswrapper[5108]: I0219 00:11:18.935996 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" podStartSLOduration=115.935985996 podStartE2EDuration="1m55.935985996s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:18.935216036 +0000 UTC m=+137.901862354" watchObservedRunningTime="2026-02-19 00:11:18.935985996 +0000 UTC m=+137.902632304" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.018385 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.018697 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.518663055 +0000 UTC m=+138.485309363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.019125 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.019633 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.51962108 +0000 UTC m=+138.486267388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.027279 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:19 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:19 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:19 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.027367 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.120546 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.121228 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:19.621206882 +0000 UTC m=+138.587853190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.222741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.223100 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.723083561 +0000 UTC m=+138.689729869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.324732 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.324967 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.824919499 +0000 UTC m=+138.791565807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.325094 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.325737 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.825728881 +0000 UTC m=+138.792375189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.426605 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.426790 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.926758188 +0000 UTC m=+138.893404496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.427451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.427778 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.927766355 +0000 UTC m=+138.894412663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.528434 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.528639 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.028611887 +0000 UTC m=+138.995258195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.529288 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.529625 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.029611753 +0000 UTC m=+138.996258061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.630566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.630762 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.130730011 +0000 UTC m=+139.097376329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.630959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.631294 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.131281167 +0000 UTC m=+139.097927475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.667353 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7g27t"] Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.672378 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g27t" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.678218 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.684837 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g27t"] Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.732611 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.732799 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.232765335 +0000 UTC m=+139.199411643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.733153 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.733228 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.733256 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t" Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.733387 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5mx\" (UniqueName: 
\"kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.733591 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.233569477 +0000 UTC m=+139.200215995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.768253 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" event={"ID":"685c7729-e78d-4436-90ba-8e2097c0faac","Type":"ContainerStarted","Data":"1e8d83b0a4be45219b183e8b7def786362694b2006b5e060018134475711e810"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.768332 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" event={"ID":"685c7729-e78d-4436-90ba-8e2097c0faac","Type":"ContainerStarted","Data":"c141e7f0b3e9b6d95ac9eb1c0161cfd8f098de47a50e7cc3081c97e7ddf24c73"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.772834 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zxqlg" event={"ID":"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5","Type":"ContainerStarted","Data":"9c72d86411c36c7a19e46c1b0e4a021acb055bc71e1343b84e741052e3afda60"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.772892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zxqlg" event={"ID":"dd0f0187-4ee8-4a18-bbd2-578a4831e5e5","Type":"ContainerStarted","Data":"71c51cba78552daf80a70a742a4e11b5d4f435aa47a4e415fc6941d5b7d20c9f"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.773090 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-zxqlg"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.783532 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" event={"ID":"17d3ac16-ba7a-4a9b-a8e4-c98fac6e214e","Type":"ContainerStarted","Data":"f511fcc0e5ef3e3d556afd51126579ceaf41a52b80aba2e9c30e6170dfe7d206"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.786555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-x2qdv" event={"ID":"c7c22258-4003-4696-805b-422c06068fe9","Type":"ContainerStarted","Data":"554669f5f0dda5ce19ca5f4862f6eafe69aa24639e6443c023ef6f832dc7a435"}
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.788365 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" gracePeriod=30
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.789847 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-k745b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.789917 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.796807 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-zt82k" podStartSLOduration=116.796787378 podStartE2EDuration="1m56.796787378s" podCreationTimestamp="2026-02-19 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:19.795642287 +0000 UTC m=+138.762288585" watchObservedRunningTime="2026-02-19 00:11:19.796787378 +0000 UTC m=+138.763433686"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.804353 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ffsjp"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.828352 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zxqlg" podStartSLOduration=8.828330307 podStartE2EDuration="8.828330307s" podCreationTimestamp="2026-02-19 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:19.826133618 +0000 UTC m=+138.792779936" watchObservedRunningTime="2026-02-19 00:11:19.828330307 +0000 UTC m=+138.794976625"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.835718 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.836442 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.836559 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.837286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tb5mx\" (UniqueName: \"kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.837361 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.337342626 +0000 UTC m=+139.303988934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.854493 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.859365 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.898044 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-dnr7x" podStartSLOduration=117.89802337 podStartE2EDuration="1m57.89802337s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:19.864542119 +0000 UTC m=+138.831188427" watchObservedRunningTime="2026-02-19 00:11:19.89802337 +0000 UTC m=+138.864669678"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.907084 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb5mx\" (UniqueName: \"kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx\") pod \"certified-operators-7g27t\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") " pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.924350 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bf6wt"]
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.940564 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:19 crc kubenswrapper[5108]: E0219 00:11:19.940878 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.440865379 +0000 UTC m=+139.407511687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.952326 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.955057 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.958174 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf6wt"]
Feb 19 00:11:19 crc kubenswrapper[5108]: I0219 00:11:19.985296 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.029792 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:11:20 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 19 00:11:20 crc kubenswrapper[5108]: [+]process-running ok
Feb 19 00:11:20 crc kubenswrapper[5108]: healthz check failed
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.029860 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.053909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.054122 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.554089351 +0000 UTC m=+139.520735659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.054868 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.054916 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.055017 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmtt\" (UniqueName: \"kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.055117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.055441 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.555422106 +0000 UTC m=+139.522068414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.064645 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-np88t"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.084485 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.090197 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-np88t"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157093 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157382 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157517 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157546 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157646 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6q4\" (UniqueName: \"kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157678 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.157700 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmtt\" (UniqueName: \"kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.158206 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.658177888 +0000 UTC m=+139.624824196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.158543 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.158828 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.186858 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmtt\" (UniqueName: \"kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt\") pod \"community-operators-bf6wt\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") " pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.259622 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.259671 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.259751 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.259778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6q4\" (UniqueName: \"kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.261527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.261618 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.261840 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.761825935 +0000 UTC m=+139.728472243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.274590 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lqwnl"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.285197 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6q4\" (UniqueName: \"kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4\") pod \"certified-operators-np88t\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") " pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.291280 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.319100 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.325407 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqwnl"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.330071 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53692: no serving certificate available for the kubelet"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.363997 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.364290 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22np\" (UniqueName: \"kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.364382 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.364442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.364553 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.864536327 +0000 UTC m=+139.831182625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.421453 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53704: no serving certificate available for the kubelet"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.443211 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.465725 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.465796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.465850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.465918 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z22np\" (UniqueName: \"kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.466300 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:20.966281522 +0000 UTC m=+139.932927830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.466851 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.473295 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.495247 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22np\" (UniqueName: \"kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np\") pod \"community-operators-lqwnl\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") " pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.528780 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g27t"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.530331 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53710: no serving certificate available for the kubelet"
Feb 19 00:11:20 crc kubenswrapper[5108]: W0219 00:11:20.545646 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aefb89a_2ddc_4334_9bab_28390ba5a389.slice/crio-4dbb53113a3aeff4d91c0da8f759613feeb2c5f6d6019293a98d6e9b6a7ad7cc WatchSource:0}: Error finding container 4dbb53113a3aeff4d91c0da8f759613feeb2c5f6d6019293a98d6e9b6a7ad7cc: Status 404 returned error can't find the container with id 4dbb53113a3aeff4d91c0da8f759613feeb2c5f6d6019293a98d6e9b6a7ad7cc
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.567398 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.567889 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.067869974 +0000 UTC m=+140.034516282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.620034 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53716: no serving certificate available for the kubelet"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.663138 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.671072 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.671384 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.171371966 +0000 UTC m=+140.138018274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.738376 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53726: no serving certificate available for the kubelet"
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.760438 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bf6wt"]
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.773275 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.773516 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.273481622 +0000 UTC m=+140.240127950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.773665 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.774466 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.274436747 +0000 UTC m=+140.241083065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.819004 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-np88t"] Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.827525 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" event={"ID":"d08b33e3-428b-460b-b3ff-56ffbf1c68f2","Type":"ContainerStarted","Data":"752d047eeefaac6564b91b0041a179cb9019d10c97db64aabd8254ce7a48f1cc"} Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.838270 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerStarted","Data":"77caeaa12cc4a5e04b77cf882496c3a68eeda12defa19b774aba151e4866c27b"} Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.838336 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerStarted","Data":"4dbb53113a3aeff4d91c0da8f759613feeb2c5f6d6019293a98d6e9b6a7ad7cc"} Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.851774 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53732: no serving certificate available for the kubelet" Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.852380 5108 generic.go:358] "Generic (PLEG): container finished" podID="d1327638-8c00-4315-be3c-f9f8c70720d0" 
containerID="b8b5c868984f4c228ce6ffc53a1ce30992bd2347dcc934cfbfe67d83ff248575" exitCode=0 Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.852433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" event={"ID":"d1327638-8c00-4315-be3c-f9f8c70720d0","Type":"ContainerDied","Data":"b8b5c868984f4c228ce6ffc53a1ce30992bd2347dcc934cfbfe67d83ff248575"} Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.874808 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.874922 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.374904518 +0000 UTC m=+140.341550826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.875244 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.875674 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.375665939 +0000 UTC m=+140.342312247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.976263 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.976524 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.47648652 +0000 UTC m=+140.443132838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.977352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:20 crc kubenswrapper[5108]: E0219 00:11:20.977702 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.477688171 +0000 UTC m=+140.444334469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:20 crc kubenswrapper[5108]: I0219 00:11:20.997431 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqwnl"] Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.018614 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:21 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:21 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:21 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.018691 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:21 crc kubenswrapper[5108]: W0219 00:11:21.031068 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726b3fe7_f433_4a31_a1df_fd2aa1aacda4.slice/crio-a8a31d7675cc6ed8fbd7c7e06365552bec50815bf0e932e01e8b78f240c647e1 WatchSource:0}: Error finding container a8a31d7675cc6ed8fbd7c7e06365552bec50815bf0e932e01e8b78f240c647e1: Status 404 returned error can't find the container with id a8a31d7675cc6ed8fbd7c7e06365552bec50815bf0e932e01e8b78f240c647e1 
Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.064356 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53748: no serving certificate available for the kubelet" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.078664 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.080273 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.580222369 +0000 UTC m=+140.546868677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.182302 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.183485 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.683464224 +0000 UTC m=+140.650110532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.284082 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.284202 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.784169902 +0000 UTC m=+140.750816210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.284630 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.285003 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.784996074 +0000 UTC m=+140.751642382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.386187 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.386489 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.886470572 +0000 UTC m=+140.853116880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.427266 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53758: no serving certificate available for the kubelet" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.451100 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.465796 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.477207 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.477655 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.491915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.492618 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:21.992600275 +0000 UTC m=+140.959246593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.495620 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.593948 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.594086 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.094056754 +0000 UTC m=+141.060703062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.594567 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.594665 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.594705 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.595201 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:22.095183324 +0000 UTC m=+141.061829632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.695453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.695741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.695767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.695920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir\") pod 
\"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.696056 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.196039966 +0000 UTC m=+141.162686274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.723587 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.796905 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.798068 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.798407 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.298390567 +0000 UTC m=+141.265037055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.867610 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"] Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.869868 5108 generic.go:358] "Generic (PLEG): container finished" podID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerID="e206af99fb789eaaea7085dd13e69e3f9f9860f2a16597d83050023d2a4eaf56" exitCode=0 Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.879299 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" 
event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerDied","Data":"e206af99fb789eaaea7085dd13e69e3f9f9860f2a16597d83050023d2a4eaf56"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.879353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerStarted","Data":"11f2fe8c04bb7a3b8eed82003e1261717c532ff2686dee85c5406417e292e39d"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.879481 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.880442 5108 generic.go:358] "Generic (PLEG): container finished" podID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerID="77caeaa12cc4a5e04b77cf882496c3a68eeda12defa19b774aba151e4866c27b" exitCode=0 Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.880560 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerDied","Data":"77caeaa12cc4a5e04b77cf882496c3a68eeda12defa19b774aba151e4866c27b"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.883490 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.896200 5108 generic.go:358] "Generic (PLEG): container finished" podID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerID="65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9" exitCode=0 Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.897407 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"] Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.897497 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerDied","Data":"65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.897532 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerStarted","Data":"a8a31d7675cc6ed8fbd7c7e06365552bec50815bf0e932e01e8b78f240c647e1"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.898840 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.898984 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.398953141 +0000 UTC m=+141.365599439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.899876 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:21 crc kubenswrapper[5108]: E0219 00:11:21.901015 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.401006946 +0000 UTC m=+141.367653254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.901033 5108 generic.go:358] "Generic (PLEG): container finished" podID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerID="495a3d48fea778b33a19b80bedce40554d6f2ef21ee5c0e37358a319fd3f60d9" exitCode=0 Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.902065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerDied","Data":"495a3d48fea778b33a19b80bedce40554d6f2ef21ee5c0e37358a319fd3f60d9"} Feb 19 00:11:21 crc kubenswrapper[5108]: I0219 00:11:21.902115 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerStarted","Data":"d58c95913d1e7cf3e2be8d1d7d20249958a23ee7c1006df6cfcdafb47ea29556"} Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.007853 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.008381 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.008460 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.008676 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4lq\" (UniqueName: \"kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.008828 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.508805043 +0000 UTC m=+141.475451351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.020157 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:22 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:22 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:22 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.020249 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.109893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.109974 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.109993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.110100 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kc4lq\" (UniqueName: \"kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.110790 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.111049 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.611037142 +0000 UTC m=+141.577683450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.111363 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.140893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc4lq\" (UniqueName: \"kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq\") pod \"redhat-marketplace-df8pn\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") " pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.145340 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53764: no serving certificate available for the kubelet" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.198266 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.215114 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.232044 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.715785517 +0000 UTC m=+141.682431835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.246662 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.273029 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.286676 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.296642 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.317463 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.317502 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ppx\" (UniqueName: \"kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.317523 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.317578 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 
00:11:22.317865 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:22.817851032 +0000 UTC m=+141.784497340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.384095 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419415 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419520 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t9x\" (UniqueName: \"kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x\") pod \"d1327638-8c00-4315-be3c-f9f8c70720d0\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419663 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume\") pod \"d1327638-8c00-4315-be3c-f9f8c70720d0\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419731 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume\") pod \"d1327638-8c00-4315-be3c-f9f8c70720d0\" (UID: \"d1327638-8c00-4315-be3c-f9f8c70720d0\") " Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419864 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ppx\" (UniqueName: \"kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.419914 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.420275 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:22.920216264 +0000 UTC m=+141.886862572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.420442 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.420518 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.421411 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1327638-8c00-4315-be3c-f9f8c70720d0" (UID: "d1327638-8c00-4315-be3c-f9f8c70720d0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.449301 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1327638-8c00-4315-be3c-f9f8c70720d0" (UID: "d1327638-8c00-4315-be3c-f9f8c70720d0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.451206 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x" (OuterVolumeSpecName: "kube-api-access-z4t9x") pod "d1327638-8c00-4315-be3c-f9f8c70720d0" (UID: "d1327638-8c00-4315-be3c-f9f8c70720d0"). InnerVolumeSpecName "kube-api-access-z4t9x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.457633 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ppx\" (UniqueName: \"kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx\") pod \"redhat-marketplace-csqkl\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.521191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.521641 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d1327638-8c00-4315-be3c-f9f8c70720d0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.521653 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1327638-8c00-4315-be3c-f9f8c70720d0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.521663 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4t9x\" (UniqueName: \"kubernetes.io/projected/d1327638-8c00-4315-be3c-f9f8c70720d0-kube-api-access-z4t9x\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.522679 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.022660668 +0000 UTC m=+141.989306976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.597268 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.622381 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.622887 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.122862942 +0000 UTC m=+142.089509250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.656866 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.724456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.724839 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.224821774 +0000 UTC m=+142.191468082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.830215 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.830595 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.330578277 +0000 UTC m=+142.297224585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.841550 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.848190 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-lhp9s" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.858955 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.859734 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1327638-8c00-4315-be3c-f9f8c70720d0" containerName="collect-profiles" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.859807 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1327638-8c00-4315-be3c-f9f8c70720d0" containerName="collect-profiles" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.859986 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1327638-8c00-4315-be3c-f9f8c70720d0" containerName="collect-profiles" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.868299 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.870359 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.894453 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"] Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.934708 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.934804 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.934864 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.934952 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnbz\" (UniqueName: 
\"kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:22 crc kubenswrapper[5108]: E0219 00:11:22.937441 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.437421868 +0000 UTC m=+142.404068176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.962487 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerStarted","Data":"7ec3a2858f2024d311ea006441198cc284e7da46e39e56cfdf63f269f0354c58"} Feb 19 00:11:22 crc kubenswrapper[5108]: I0219 00:11:22.962535 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerStarted","Data":"c65b14e7a7b4ab37d036f683a1228f2175fc6db62377b79910d621c57dec15ac"} Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.007438 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"] Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.010500 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"42cf11b0-c684-4732-a90f-08e028c943ef","Type":"ContainerStarted","Data":"2f96ff65cfcbf97cc7c751b1c9e50f04df016497c2c5d75bc8dc86b41efdec1a"} Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.010555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"42cf11b0-c684-4732-a90f-08e028c943ef","Type":"ContainerStarted","Data":"297c5efeb716dbf52564660a68c256b6522c8357e9c18edc1b184ffb41885c46"} Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.026517 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.026705 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-h7njr" event={"ID":"d1327638-8c00-4315-be3c-f9f8c70720d0","Type":"ContainerDied","Data":"e804faf273116fcfd0dcba126ba65b262fece39d92aa1feba7f28d7e8dac7999"} Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.026743 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e804faf273116fcfd0dcba126ba65b262fece39d92aa1feba7f28d7e8dac7999" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.037873 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.038212 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnbz\" (UniqueName: \"kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz\") pod 
\"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.039013 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:23 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Feb 19 00:11:23 crc kubenswrapper[5108]: [+]process-running ok Feb 19 00:11:23 crc kubenswrapper[5108]: healthz check failed Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.039096 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.049963 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.549900759 +0000 UTC m=+142.516547067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.050060 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.050143 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.050218 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.050922 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content\") pod \"redhat-operators-pgh2p\" (UID: 
\"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.051189 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.551182033 +0000 UTC m=+142.517828331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.051285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.090438 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnbz\" (UniqueName: \"kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz\") pod \"redhat-operators-pgh2p\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") " pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.094637 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.094618818 podStartE2EDuration="2.094618818s" podCreationTimestamp="2026-02-19 00:11:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:23.082338242 +0000 UTC m=+142.048984550" watchObservedRunningTime="2026-02-19 00:11:23.094618818 +0000 UTC m=+142.061265126" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.151163 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.151415 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.651392728 +0000 UTC m=+142.618039036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.151851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.153099 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.653090353 +0000 UTC m=+142.619736661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.198010 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.252953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.253302 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.753284368 +0000 UTC m=+142.719930676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.274218 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.295396 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.322325 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.354565 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.354710 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppr5\" (UniqueName: \"kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.354762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.354919 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.355154 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.855139606 +0000 UTC m=+142.821785914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.457664 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.458167 5108 ???:1] "http: TLS handshake error from 192.168.126.11:53768: no serving certificate available for the kubelet" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.458296 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.958037403 +0000 UTC m=+142.924683711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.458910 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.459452 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.459685 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.459865 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc 
kubenswrapper[5108]: I0219 00:11:23.460232 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lppr5\" (UniqueName: \"kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.460355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.460788 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:23.960777936 +0000 UTC m=+142.927424244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.495060 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lppr5\" (UniqueName: \"kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5\") pod \"redhat-operators-98tbv\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.562180 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.562538 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.062510362 +0000 UTC m=+143.029156670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.562654 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.563399 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.063388894 +0000 UTC m=+143.030035202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.603169 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"] Feb 19 00:11:23 crc kubenswrapper[5108]: W0219 00:11:23.621701 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod664a83e1_cb9d_4e9d_85c7_88a01dc6d040.slice/crio-ca760ea02ff63c01eefee534db66301d7e5518ea841bb0779a1b3fabe141c884 WatchSource:0}: Error finding container ca760ea02ff63c01eefee534db66301d7e5518ea841bb0779a1b3fabe141c884: Status 404 returned error can't find the container with id ca760ea02ff63c01eefee534db66301d7e5518ea841bb0779a1b3fabe141c884 Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.626133 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.664781 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.665222 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.165206282 +0000 UTC m=+143.131852600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.769452 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.769977 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.269961508 +0000 UTC m=+143.236607816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.871223 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.871393 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.371364195 +0000 UTC m=+143.338010503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.872898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.873391 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.373378489 +0000 UTC m=+143.340024797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.934224 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:23 crc kubenswrapper[5108]: W0219 00:11:23.948829 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9656b253_82c5_4759_acc7_a885d8757845.slice/crio-914ca979bc74c9f2c46912a7aa5088df1c959ebbe9f1f219d5d6b7e0f726f01b WatchSource:0}: Error finding container 914ca979bc74c9f2c46912a7aa5088df1c959ebbe9f1f219d5d6b7e0f726f01b: Status 404 returned error can't find the container with id 914ca979bc74c9f2c46912a7aa5088df1c959ebbe9f1f219d5d6b7e0f726f01b Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.968145 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-6vnnq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.968208 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-6vnnq" podUID="1972f121-c7ba-4edb-817f-093975dff371" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.974603 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.974809 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.474782735 +0000 UTC m=+143.441429043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.975028 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:11:23 crc kubenswrapper[5108]: E0219 00:11:23.975370 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.47535835 +0000 UTC m=+143.442004658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:23 crc kubenswrapper[5108]: I0219 00:11:23.992303 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-w5c5q"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.003022 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-9dxbw"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.003058 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-9dxbw"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.004547 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-9dxbw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.004590 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-9dxbw" podUID="6975a144-b433-427f-9319-27a9b81143ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.013991 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.029501 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n8lfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:11:24 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Feb 19 00:11:24 crc kubenswrapper[5108]: [+]process-running ok
Feb 19 00:11:24 crc kubenswrapper[5108]: healthz check failed
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.029572 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg" podUID="99f4ecd3-c69e-46b1-b2d1-e7d4bff3899f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.056965 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerStarted","Data":"914ca979bc74c9f2c46912a7aa5088df1c959ebbe9f1f219d5d6b7e0f726f01b"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.060239 5108 generic.go:358] "Generic (PLEG): container finished" podID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerID="f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9" exitCode=0
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.060400 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerDied","Data":"f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.060447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerStarted","Data":"a1a62e1225b0a64b4aaf965c13e1a50bbbb140d69e1a5d9dcae782ed752bd88b"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.070114 5108 generic.go:358] "Generic (PLEG): container finished" podID="7024eadd-8a38-49f7-996f-bb49882d226e" containerID="7ec3a2858f2024d311ea006441198cc284e7da46e39e56cfdf63f269f0354c58" exitCode=0
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.070206 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerDied","Data":"7ec3a2858f2024d311ea006441198cc284e7da46e39e56cfdf63f269f0354c58"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.072242 5108 generic.go:358] "Generic (PLEG): container finished" podID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerID="4b9f9e159f7a35077e60ab929362eeb9552f4fed3ec23346fd69ccf88a3dbd74" exitCode=0
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.072328 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerDied","Data":"4b9f9e159f7a35077e60ab929362eeb9552f4fed3ec23346fd69ccf88a3dbd74"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.072345 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerStarted","Data":"ca760ea02ff63c01eefee534db66301d7e5518ea841bb0779a1b3fabe141c884"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.076076 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.076289 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.576258284 +0000 UTC m=+143.542904592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.076746 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.078684 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.578672098 +0000 UTC m=+143.545318496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.088822 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" event={"ID":"d08b33e3-428b-460b-b3ff-56ffbf1c68f2","Type":"ContainerStarted","Data":"6d2090cdc8af5d46379a813950deac5c325f9375f825b038a88d104ab2c9315f"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.102683 5108 generic.go:358] "Generic (PLEG): container finished" podID="42cf11b0-c684-4732-a90f-08e028c943ef" containerID="2f96ff65cfcbf97cc7c751b1c9e50f04df016497c2c5d75bc8dc86b41efdec1a" exitCode=0
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.102772 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"42cf11b0-c684-4732-a90f-08e028c943ef","Type":"ContainerDied","Data":"2f96ff65cfcbf97cc7c751b1c9e50f04df016497c2c5d75bc8dc86b41efdec1a"}
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.178460 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.180141 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.680119816 +0000 UTC m=+143.646766124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.210066 5108 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.281154 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.281640 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.781626595 +0000 UTC m=+143.748272903 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.382705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.382849 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.882826536 +0000 UTC m=+143.849472854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.383025 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.383376 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.883364531 +0000 UTC m=+143.850010839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.484824 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.485160 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:24.985144208 +0000 UTC m=+143.951790516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.586335 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.586722 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.086709868 +0000 UTC m=+144.053356176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.651428 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.690542 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.690916 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.190899799 +0000 UTC m=+144.157546107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.793623 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.794368 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.29435209 +0000 UTC m=+144.260998398 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.894590 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.894788 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.39476017 +0000 UTC m=+144.361406478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.895248 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.895707 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.395695696 +0000 UTC m=+144.362342004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qv7jb" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:24 crc kubenswrapper[5108]: I0219 00:11:24.996684 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:24 crc kubenswrapper[5108]: E0219 00:11:24.997088 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:25.497071341 +0000 UTC m=+144.463717639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.001010 5108 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T00:11:24.210168785Z","UUID":"fe124cd2-a105-4068-839b-b27e17642f67","Handler":null,"Name":"","Endpoint":""}
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.003673 5108 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.003700 5108 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.022434 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.033503 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-n8lfg"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.099750 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.134326 5108 generic.go:358] "Generic (PLEG): container finished" podID="9656b253-82c5-4759-acc7-a885d8757845" containerID="ce6c49aa7499bf317b5f76b5f0a57d15303008e4c2bbf41d875c4d2b5549de45" exitCode=0
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.134664 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerDied","Data":"ce6c49aa7499bf317b5f76b5f0a57d15303008e4c2bbf41d875c4d2b5549de45"}
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.135531 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.135577 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.163317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qv7jb\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.171360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" event={"ID":"d08b33e3-428b-460b-b3ff-56ffbf1c68f2","Type":"ContainerStarted","Data":"45d9f847ebf203785369705d2b9ac0af17b879a8427079106a0927c4d2bc4c18"}
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.171405 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" event={"ID":"d08b33e3-428b-460b-b3ff-56ffbf1c68f2","Type":"ContainerStarted","Data":"82814ccd9a0ad705f7d7a10616c37f1c31dfe5f05419c6717aefe789cd828b2e"}
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.200748 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4lmnf" podStartSLOduration=15.200729318 podStartE2EDuration="15.200729318s" podCreationTimestamp="2026-02-19 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:25.19481648 +0000 UTC m=+144.161462788" watchObservedRunningTime="2026-02-19 00:11:25.200729318 +0000 UTC m=+144.167375626"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.202472 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.221912 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af").
InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.277495 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.322108 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.334780 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.337801 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.338485 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.344972 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.423650 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.424114 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.505739 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.542242 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.542646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.543518 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.575685 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.613127 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"]
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.643640 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir\") pod \"42cf11b0-c684-4732-a90f-08e028c943ef\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") "
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.643800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access\") pod \"42cf11b0-c684-4732-a90f-08e028c943ef\" (UID: \"42cf11b0-c684-4732-a90f-08e028c943ef\") "
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.644413 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42cf11b0-c684-4732-a90f-08e028c943ef" (UID: "42cf11b0-c684-4732-a90f-08e028c943ef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.651200 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42cf11b0-c684-4732-a90f-08e028c943ef" (UID: "42cf11b0-c684-4732-a90f-08e028c943ef"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.681814 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.751422 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42cf11b0-c684-4732-a90f-08e028c943ef-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.751461 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42cf11b0-c684-4732-a90f-08e028c943ef-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.868081 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Feb 19 00:11:25 crc kubenswrapper[5108]: I0219 00:11:25.952558 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 19 00:11:25 crc kubenswrapper[5108]: W0219 00:11:25.977861 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod587cab42_53b6_4b3f_a6a2_7fb27f5a8427.slice/crio-9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3 WatchSource:0}: Error finding container 9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3: Status 404 returned error can't find the container with id 9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.053514 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51830: no serving certificate available for the kubelet"
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.183735 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.184272 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"42cf11b0-c684-4732-a90f-08e028c943ef","Type":"ContainerDied","Data":"297c5efeb716dbf52564660a68c256b6522c8357e9c18edc1b184ffb41885c46"}
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.184297 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="297c5efeb716dbf52564660a68c256b6522c8357e9c18edc1b184ffb41885c46"
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.192639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"587cab42-53b6-4b3f-a6a2-7fb27f5a8427","Type":"ContainerStarted","Data":"9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3"}
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.199807 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" event={"ID":"223e4146-2005-4ad4-8fff-1d248c0f8a4d","Type":"ContainerStarted","Data":"c46259d94cf35026a5da629070d90e9c663bfd17983df53225b46d36f1c25c1b"}
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.199887 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" event={"ID":"223e4146-2005-4ad4-8fff-1d248c0f8a4d","Type":"ContainerStarted","Data":"eb6fd57c01bbab0c1ec617f4f32dec2d4f69621d02a1906b5aadfad5ec784999"}
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.200503 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:26 crc kubenswrapper[5108]: I0219 00:11:26.229350 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" podStartSLOduration=124.229330111 podStartE2EDuration="2m4.229330111s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:26.222306954 +0000 UTC m=+145.188953262" watchObservedRunningTime="2026-02-19 00:11:26.229330111 +0000 UTC m=+145.195976419"
Feb 19 00:11:26 crc kubenswrapper[5108]: E0219 00:11:26.896743 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:26 crc kubenswrapper[5108]: E0219 00:11:26.898302 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:26 crc kubenswrapper[5108]: E0219 00:11:26.899677 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:26 crc kubenswrapper[5108]: E0219 00:11:26.899704 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b"
containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 19 00:11:27 crc kubenswrapper[5108]: I0219 00:11:27.212820 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"587cab42-53b6-4b3f-a6a2-7fb27f5a8427","Type":"ContainerStarted","Data":"302195527b0f6ca0a79f641b7733710032b4184f86b486718b1f53887fada486"} Feb 19 00:11:27 crc kubenswrapper[5108]: I0219 00:11:27.231232 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.231210235 podStartE2EDuration="2.231210235s" podCreationTimestamp="2026-02-19 00:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:27.225274438 +0000 UTC m=+146.191920766" watchObservedRunningTime="2026-02-19 00:11:27.231210235 +0000 UTC m=+146.197856543" Feb 19 00:11:28 crc kubenswrapper[5108]: I0219 00:11:28.221501 5108 generic.go:358] "Generic (PLEG): container finished" podID="587cab42-53b6-4b3f-a6a2-7fb27f5a8427" containerID="302195527b0f6ca0a79f641b7733710032b4184f86b486718b1f53887fada486" exitCode=0 Feb 19 00:11:28 crc kubenswrapper[5108]: I0219 00:11:28.221666 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"587cab42-53b6-4b3f-a6a2-7fb27f5a8427","Type":"ContainerDied","Data":"302195527b0f6ca0a79f641b7733710032b4184f86b486718b1f53887fada486"} Feb 19 00:11:28 crc kubenswrapper[5108]: I0219 00:11:28.757622 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-6vnnq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 19 00:11:28 crc kubenswrapper[5108]: I0219 00:11:28.757679 5108 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-747b44746d-6vnnq" podUID="1972f121-c7ba-4edb-817f-093975dff371" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.677332 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.792255 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.828952 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir\") pod \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.829085 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "587cab42-53b6-4b3f-a6a2-7fb27f5a8427" (UID: "587cab42-53b6-4b3f-a6a2-7fb27f5a8427"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.829465 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access\") pod \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\" (UID: \"587cab42-53b6-4b3f-a6a2-7fb27f5a8427\") " Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.829861 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.854619 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "587cab42-53b6-4b3f-a6a2-7fb27f5a8427" (UID: "587cab42-53b6-4b3f-a6a2-7fb27f5a8427"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.931143 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/587cab42-53b6-4b3f-a6a2-7fb27f5a8427-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:29 crc kubenswrapper[5108]: I0219 00:11:29.988804 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zxqlg" Feb 19 00:11:30 crc kubenswrapper[5108]: I0219 00:11:30.242184 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:30 crc kubenswrapper[5108]: I0219 00:11:30.242185 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"587cab42-53b6-4b3f-a6a2-7fb27f5a8427","Type":"ContainerDied","Data":"9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3"} Feb 19 00:11:30 crc kubenswrapper[5108]: I0219 00:11:30.242334 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9799641aa62b486dc978b1fde1af649f9f0da3ba6cfe20dbd5bc12dc014d9ff3" Feb 19 00:11:30 crc kubenswrapper[5108]: I0219 00:11:30.466258 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:11:31 crc kubenswrapper[5108]: I0219 00:11:31.205631 5108 ???:1] "http: TLS handshake error from 192.168.126.11:51838: no serving certificate available for the kubelet" Feb 19 00:11:34 crc kubenswrapper[5108]: I0219 00:11:34.015751 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:34 crc kubenswrapper[5108]: I0219 00:11:34.022524 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-9dxbw" Feb 19 00:11:36 crc kubenswrapper[5108]: E0219 00:11:36.896821 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:36 crc kubenswrapper[5108]: E0219 00:11:36.898917 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:36 crc kubenswrapper[5108]: E0219 00:11:36.900538 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:36 crc kubenswrapper[5108]: E0219 00:11:36.900579 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 19 00:11:38 crc kubenswrapper[5108]: I0219 00:11:38.762589 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-6vnnq" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.307417 5108 generic.go:358] "Generic (PLEG): container finished" podID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerID="6733933b926060d269425f4cdb6afed8c55843109e9c1cf0c40d68d452b9b7c2" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.307507 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerDied","Data":"6733933b926060d269425f4cdb6afed8c55843109e9c1cf0c40d68d452b9b7c2"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.310359 5108 generic.go:358] "Generic (PLEG): container finished" podID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerID="8f3e6d149cb943ee95cf07fd25c2cc98a080ff08897bab8cb65a6e9e8b149d00" exitCode=0 Feb 19 00:11:39 crc 
kubenswrapper[5108]: I0219 00:11:39.310522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerDied","Data":"8f3e6d149cb943ee95cf07fd25c2cc98a080ff08897bab8cb65a6e9e8b149d00"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.312642 5108 generic.go:358] "Generic (PLEG): container finished" podID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerID="9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.312772 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerDied","Data":"9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.315559 5108 generic.go:358] "Generic (PLEG): container finished" podID="9656b253-82c5-4759-acc7-a885d8757845" containerID="52a01072ea5a1b4e5bacf104835c771dd19633f065cb8769166441cc9bc2f1b9" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.315636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerDied","Data":"52a01072ea5a1b4e5bacf104835c771dd19633f065cb8769166441cc9bc2f1b9"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.318887 5108 generic.go:358] "Generic (PLEG): container finished" podID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerID="63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.318974 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" 
event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerDied","Data":"63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.321788 5108 generic.go:358] "Generic (PLEG): container finished" podID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerID="99e67a8bfbe8b0f39958777bc45cba02b813b5d9b1e692fa23dfc5de03ed2819" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.322097 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerDied","Data":"99e67a8bfbe8b0f39958777bc45cba02b813b5d9b1e692fa23dfc5de03ed2819"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.331850 5108 generic.go:358] "Generic (PLEG): container finished" podID="7024eadd-8a38-49f7-996f-bb49882d226e" containerID="499bf7708614511bbbd3e2e6cfe47e5c3eca104ffaa13331c006d8b012183b5d" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.332001 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerDied","Data":"499bf7708614511bbbd3e2e6cfe47e5c3eca104ffaa13331c006d8b012183b5d"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.340065 5108 generic.go:358] "Generic (PLEG): container finished" podID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerID="e5a9555c5f9a3ee1fe3244db5fc8de41a71f45676afd98ddf86bbbc588828177" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.340143 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerDied","Data":"e5a9555c5f9a3ee1fe3244db5fc8de41a71f45676afd98ddf86bbbc588828177"} Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.593479 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.594060 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.603499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.606075 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.694006 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.695207 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.695273 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.695330 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.704843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.705398 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/766a3580-a7a9-49f7-8948-2d949558d2d2-metrics-certs\") pod \"network-metrics-daemon-2clv5\" (UID: \"766a3580-a7a9-49f7-8948-2d949558d2d2\") " pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.705846 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.764505 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clv5" Feb 19 00:11:39 crc kubenswrapper[5108]: W0219 00:11:39.936412 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-57d2d3d89c2a976912dd9b07bd4a1d6dc2c808e863087448591d1f61d8e9bf86 WatchSource:0}: Error finding container 57d2d3d89c2a976912dd9b07bd4a1d6dc2c808e863087448591d1f61d8e9bf86: Status 404 returned error can't find the container with id 57d2d3d89c2a976912dd9b07bd4a1d6dc2c808e863087448591d1f61d8e9bf86 Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.969586 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.983978 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:39 crc kubenswrapper[5108]: I0219 00:11:39.987577 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2clv5"] Feb 19 00:11:40 crc kubenswrapper[5108]: W0219 00:11:40.001001 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766a3580_a7a9_49f7_8948_2d949558d2d2.slice/crio-b9212bbde9147706592360412b811c432ab3b0b8a2ffd2b3a4f1d8424b4c3742 WatchSource:0}: Error finding container b9212bbde9147706592360412b811c432ab3b0b8a2ffd2b3a4f1d8424b4c3742: Status 404 returned error can't find the container with id b9212bbde9147706592360412b811c432ab3b0b8a2ffd2b3a4f1d8424b4c3742 Feb 19 00:11:40 crc kubenswrapper[5108]: W0219 00:11:40.223997 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-e6f85a24f8fd1d40cceebdf7f4bb9fb0863e8b491436f7ad8f56d96ca5f36e73 WatchSource:0}: Error finding container e6f85a24f8fd1d40cceebdf7f4bb9fb0863e8b491436f7ad8f56d96ca5f36e73: Status 404 returned error can't find the container with id e6f85a24f8fd1d40cceebdf7f4bb9fb0863e8b491436f7ad8f56d96ca5f36e73 Feb 19 00:11:40 crc kubenswrapper[5108]: W0219 00:11:40.239743 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-62f10f6e3a52f6856a7800aeafe4cfb1022dc8770ca19e23c5f981e38b5fc1d6 WatchSource:0}: Error finding container 62f10f6e3a52f6856a7800aeafe4cfb1022dc8770ca19e23c5f981e38b5fc1d6: Status 404 returned error can't find the container with id 62f10f6e3a52f6856a7800aeafe4cfb1022dc8770ca19e23c5f981e38b5fc1d6 Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.357480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"62f10f6e3a52f6856a7800aeafe4cfb1022dc8770ca19e23c5f981e38b5fc1d6"} Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.358730 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"57d2d3d89c2a976912dd9b07bd4a1d6dc2c808e863087448591d1f61d8e9bf86"} Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.361217 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clv5" event={"ID":"766a3580-a7a9-49f7-8948-2d949558d2d2","Type":"ContainerStarted","Data":"b9212bbde9147706592360412b811c432ab3b0b8a2ffd2b3a4f1d8424b4c3742"} Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.362615 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e6f85a24f8fd1d40cceebdf7f4bb9fb0863e8b491436f7ad8f56d96ca5f36e73"} Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.365394 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerStarted","Data":"b5663ba63c709838a26639ae9dd26f72913463e4732ba3daa86117631cf183fa"} Feb 19 00:11:40 crc kubenswrapper[5108]: I0219 00:11:40.367521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerStarted","Data":"d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954"} Feb 19 00:11:41 crc kubenswrapper[5108]: I0219 00:11:41.381728 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerStarted","Data":"649578f264c6cee6e689175ede93e92ddb662f1492a5db22dc2bf4f3a33489d0"} Feb 19 00:11:41 crc kubenswrapper[5108]: I0219 00:11:41.384923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerStarted","Data":"a8163bc7543e908e819e02d90dae254a8028c133bb32588bec9906b432ffddb1"} Feb 19 00:11:41 crc kubenswrapper[5108]: I0219 00:11:41.469531 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56304: no serving certificate available for the kubelet" Feb 19 00:11:41 crc kubenswrapper[5108]: I0219 00:11:41.857030 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7g27t" podStartSLOduration=6.741771415 podStartE2EDuration="22.857010014s" podCreationTimestamp="2026-02-19 00:11:19 +0000 UTC" firstStartedPulling="2026-02-19 00:11:21.881844996 +0000 UTC m=+140.848491304" lastFinishedPulling="2026-02-19 00:11:37.997083555 +0000 UTC m=+156.963729903" observedRunningTime="2026-02-19 00:11:41.856195572 +0000 UTC m=+160.822841880" watchObservedRunningTime="2026-02-19 00:11:41.857010014 +0000 UTC m=+160.823656312" Feb 19 00:11:41 crc kubenswrapper[5108]: I0219 00:11:41.884059 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-np88t" podStartSLOduration=5.742089633 podStartE2EDuration="21.884037742s" podCreationTimestamp="2026-02-19 00:11:20 +0000 UTC" firstStartedPulling="2026-02-19 00:11:21.872355324 +0000 UTC m=+140.839001632" lastFinishedPulling="2026-02-19 00:11:38.014303403 +0000 UTC m=+156.980949741" observedRunningTime="2026-02-19 00:11:41.879487941 +0000 UTC m=+160.846134269" watchObservedRunningTime="2026-02-19 00:11:41.884037742 +0000 UTC m=+160.850684050" Feb 19 00:11:42 crc 
kubenswrapper[5108]: I0219 00:11:42.394585 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerStarted","Data":"5d07707604dc8cad65aed0301c1862990e0f5ee0f21acc3118e332386938b333"} Feb 19 00:11:42 crc kubenswrapper[5108]: I0219 00:11:42.400453 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerStarted","Data":"fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56"} Feb 19 00:11:42 crc kubenswrapper[5108]: I0219 00:11:42.754337 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pgh2p" podStartSLOduration=6.761833467 podStartE2EDuration="20.754126s" podCreationTimestamp="2026-02-19 00:11:22 +0000 UTC" firstStartedPulling="2026-02-19 00:11:24.073082039 +0000 UTC m=+143.039728347" lastFinishedPulling="2026-02-19 00:11:38.065374522 +0000 UTC m=+157.032020880" observedRunningTime="2026-02-19 00:11:42.728014416 +0000 UTC m=+161.694660724" watchObservedRunningTime="2026-02-19 00:11:42.754126 +0000 UTC m=+161.720772308" Feb 19 00:11:42 crc kubenswrapper[5108]: I0219 00:11:42.758344 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98tbv" podStartSLOduration=6.830382165 podStartE2EDuration="19.758336382s" podCreationTimestamp="2026-02-19 00:11:23 +0000 UTC" firstStartedPulling="2026-02-19 00:11:25.135437831 +0000 UTC m=+144.102084139" lastFinishedPulling="2026-02-19 00:11:38.063392008 +0000 UTC m=+157.030038356" observedRunningTime="2026-02-19 00:11:42.754092849 +0000 UTC m=+161.720739187" watchObservedRunningTime="2026-02-19 00:11:42.758336382 +0000 UTC m=+161.724982700" Feb 19 00:11:42 crc kubenswrapper[5108]: I0219 00:11:42.777487 5108 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-df8pn" podStartSLOduration=6.748327586 podStartE2EDuration="21.777463111s" podCreationTimestamp="2026-02-19 00:11:21 +0000 UTC" firstStartedPulling="2026-02-19 00:11:22.963469171 +0000 UTC m=+141.930115479" lastFinishedPulling="2026-02-19 00:11:37.992604666 +0000 UTC m=+156.959251004" observedRunningTime="2026-02-19 00:11:42.776396992 +0000 UTC m=+161.743043310" watchObservedRunningTime="2026-02-19 00:11:42.777463111 +0000 UTC m=+161.744109419"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.199085 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pgh2p"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.199176 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pgh2p"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.413056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerStarted","Data":"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"}
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.426259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerStarted","Data":"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4"}
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.429641 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"62dc0d8e16cfa8791d18654b57e90711ffa50dfda22de045a12dd8dae332ded7"}
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.434402 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d649531b34aa1d23aa5a69460e07efc5acc508bc5ef1418910687ec5daff1d06"}
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.436736 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e239f0ddcc214abad75d0aff8c029d05024e8438ba4b2a67cbf3b395a0669544"}
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.446743 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lqwnl" podStartSLOduration=7.352110389 podStartE2EDuration="23.446719139s" podCreationTimestamp="2026-02-19 00:11:20 +0000 UTC" firstStartedPulling="2026-02-19 00:11:21.897829062 +0000 UTC m=+140.864475370" lastFinishedPulling="2026-02-19 00:11:37.992437772 +0000 UTC m=+156.959084120" observedRunningTime="2026-02-19 00:11:43.438629293 +0000 UTC m=+162.405275641" watchObservedRunningTime="2026-02-19 00:11:43.446719139 +0000 UTC m=+162.413365457"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.487462 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bf6wt" podStartSLOduration=8.376060715 podStartE2EDuration="24.487431551s" podCreationTimestamp="2026-02-19 00:11:19 +0000 UTC" firstStartedPulling="2026-02-19 00:11:21.901927671 +0000 UTC m=+140.868573969" lastFinishedPulling="2026-02-19 00:11:38.013298467 +0000 UTC m=+156.979944805" observedRunningTime="2026-02-19 00:11:43.465283692 +0000 UTC m=+162.431930040" watchObservedRunningTime="2026-02-19 00:11:43.487431551 +0000 UTC m=+162.454077909"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.627019 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-98tbv"
Feb 19 00:11:43 crc kubenswrapper[5108]: I0219 00:11:43.627071 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98tbv"
Feb 19 00:11:44 crc kubenswrapper[5108]: I0219 00:11:44.444654 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clv5" event={"ID":"766a3580-a7a9-49f7-8948-2d949558d2d2","Type":"ContainerStarted","Data":"56088521a73370f392692d38ca88aa96a50d3a085b1e40978191d7a5327f1f43"}
Feb 19 00:11:44 crc kubenswrapper[5108]: I0219 00:11:44.598670 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:44 crc kubenswrapper[5108]: I0219 00:11:44.674821 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-csqkl" podStartSLOduration=8.745353856 podStartE2EDuration="22.674802957s" podCreationTimestamp="2026-02-19 00:11:22 +0000 UTC" firstStartedPulling="2026-02-19 00:11:24.068548859 +0000 UTC m=+143.035195167" lastFinishedPulling="2026-02-19 00:11:37.99799793 +0000 UTC m=+156.964644268" observedRunningTime="2026-02-19 00:11:44.640736692 +0000 UTC m=+163.607383000" watchObservedRunningTime="2026-02-19 00:11:44.674802957 +0000 UTC m=+163.641449265"
Feb 19 00:11:44 crc kubenswrapper[5108]: I0219 00:11:44.984566 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-98tbv" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="registry-server" probeResult="failure" output=<
Feb 19 00:11:44 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s
Feb 19 00:11:44 crc kubenswrapper[5108]: >
Feb 19 00:11:44 crc kubenswrapper[5108]: I0219 00:11:44.987438 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pgh2p" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="registry-server" probeResult="failure" output=<
Feb 19 00:11:44 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s
Feb 19 00:11:44 crc kubenswrapper[5108]: >
Feb 19 00:11:45 crc kubenswrapper[5108]: I0219 00:11:45.453829 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clv5" event={"ID":"766a3580-a7a9-49f7-8948-2d949558d2d2","Type":"ContainerStarted","Data":"528b1889f99c79ae2738007c6bd26b420a1dc66f91a52ac4427fb3d48eafb18c"}
Feb 19 00:11:46 crc kubenswrapper[5108]: E0219 00:11:46.894960 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:46 crc kubenswrapper[5108]: E0219 00:11:46.897488 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:46 crc kubenswrapper[5108]: E0219 00:11:46.900003 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 19 00:11:46 crc kubenswrapper[5108]: E0219 00:11:46.900044 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 19 00:11:47 crc kubenswrapper[5108]: I0219 00:11:47.219569 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb"
Feb 19 00:11:47 crc kubenswrapper[5108]: I0219 00:11:47.248799 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2clv5" podStartSLOduration=145.248762928 podStartE2EDuration="2m25.248762928s" podCreationTimestamp="2026-02-19 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:46.482673894 +0000 UTC m=+165.449320222" watchObservedRunningTime="2026-02-19 00:11:47.248762928 +0000 UTC m=+166.215409286"
Feb 19 00:11:49 crc kubenswrapper[5108]: I0219 00:11:49.796571 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzqzz"
Feb 19 00:11:49 crc kubenswrapper[5108]: I0219 00:11:49.990331 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:49 crc kubenswrapper[5108]: I0219 00:11:49.990739 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.070552 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.319982 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.320058 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.392596 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.445061 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.445115 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.489809 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhrv8_021ddbaf-7df5-4911-afaa-609338cbcd9b/kube-multus-additional-cni-plugins/0.log"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.489872 5108 generic.go:358] "Generic (PLEG): container finished" podID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53" exitCode=137
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.489996 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" event={"ID":"021ddbaf-7df5-4911-afaa-609338cbcd9b","Type":"ContainerDied","Data":"d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53"}
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.500722 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.554861 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.565011 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.570635 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.663473 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.663541 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:50 crc kubenswrapper[5108]: I0219 00:11:50.712442 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.570353 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.811657 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhrv8_021ddbaf-7df5-4911-afaa-609338cbcd9b/kube-multus-additional-cni-plugins/0.log"
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.811763 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8"
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.883625 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir\") pod \"021ddbaf-7df5-4911-afaa-609338cbcd9b\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") "
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.883708 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist\") pod \"021ddbaf-7df5-4911-afaa-609338cbcd9b\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") "
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.883733 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "021ddbaf-7df5-4911-afaa-609338cbcd9b" (UID: "021ddbaf-7df5-4911-afaa-609338cbcd9b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.883788 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready\") pod \"021ddbaf-7df5-4911-afaa-609338cbcd9b\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") "
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.883888 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5t8l\" (UniqueName: \"kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l\") pod \"021ddbaf-7df5-4911-afaa-609338cbcd9b\" (UID: \"021ddbaf-7df5-4911-afaa-609338cbcd9b\") "
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.884213 5108 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021ddbaf-7df5-4911-afaa-609338cbcd9b-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.885004 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready" (OuterVolumeSpecName: "ready") pod "021ddbaf-7df5-4911-afaa-609338cbcd9b" (UID: "021ddbaf-7df5-4911-afaa-609338cbcd9b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.885166 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "021ddbaf-7df5-4911-afaa-609338cbcd9b" (UID: "021ddbaf-7df5-4911-afaa-609338cbcd9b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.896496 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l" (OuterVolumeSpecName: "kube-api-access-k5t8l") pod "021ddbaf-7df5-4911-afaa-609338cbcd9b" (UID: "021ddbaf-7df5-4911-afaa-609338cbcd9b"). InnerVolumeSpecName "kube-api-access-k5t8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.985085 5108 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/021ddbaf-7df5-4911-afaa-609338cbcd9b-ready\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.985122 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k5t8l\" (UniqueName: \"kubernetes.io/projected/021ddbaf-7df5-4911-afaa-609338cbcd9b-kube-api-access-k5t8l\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:51 crc kubenswrapper[5108]: I0219 00:11:51.985135 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021ddbaf-7df5-4911-afaa-609338cbcd9b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.199629 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-df8pn"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.200284 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-df8pn"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.266181 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-df8pn"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.509034 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhrv8_021ddbaf-7df5-4911-afaa-609338cbcd9b/kube-multus-additional-cni-plugins/0.log"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.509649 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.509747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhrv8" event={"ID":"021ddbaf-7df5-4911-afaa-609338cbcd9b","Type":"ContainerDied","Data":"5155ae8e7666412301b1d23cd2cde74c8fd35b9aaa92de011167d630877b2672"}
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.509834 5108 scope.go:117] "RemoveContainer" containerID="d96305ced255f7762959fe39a856b2616d1ee28e7377c82723af3141440bff53"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.554349 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqwnl"]
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.562687 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhrv8"]
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.571167 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhrv8"]
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.582916 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-df8pn"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.674553 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-csqkl"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.675649 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-csqkl"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.722080 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-csqkl"
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.772345 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-np88t"]
Feb 19 00:11:52 crc kubenswrapper[5108]: I0219 00:11:52.772725 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-np88t" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="registry-server" containerID="cri-o://b5663ba63c709838a26639ae9dd26f72913463e4732ba3daa86117631cf183fa" gracePeriod=2
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.237874 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pgh2p"
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.308891 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pgh2p"
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.524536 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lqwnl" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="registry-server" containerID="cri-o://1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece" gracePeriod=2
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.580050 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-csqkl"
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.691907 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98tbv"
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.746972 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98tbv"
Feb 19 00:11:53 crc kubenswrapper[5108]: I0219 00:11:53.862099 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" path="/var/lib/kubelet/pods/021ddbaf-7df5-4911-afaa-609338cbcd9b/volumes"
Feb 19 00:11:54 crc kubenswrapper[5108]: I0219 00:11:54.947588 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"]
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.379467 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.441481 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content\") pod \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") "
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.441688 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z22np\" (UniqueName: \"kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np\") pod \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") "
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.441755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities\") pod \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\" (UID: \"726b3fe7-f433-4a31-a1df-fd2aa1aacda4\") "
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.443364 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities" (OuterVolumeSpecName: "utilities") pod "726b3fe7-f433-4a31-a1df-fd2aa1aacda4" (UID: "726b3fe7-f433-4a31-a1df-fd2aa1aacda4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.450620 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np" (OuterVolumeSpecName: "kube-api-access-z22np") pod "726b3fe7-f433-4a31-a1df-fd2aa1aacda4" (UID: "726b3fe7-f433-4a31-a1df-fd2aa1aacda4"). InnerVolumeSpecName "kube-api-access-z22np". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.493217 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "726b3fe7-f433-4a31-a1df-fd2aa1aacda4" (UID: "726b3fe7-f433-4a31-a1df-fd2aa1aacda4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.536808 5108 generic.go:358] "Generic (PLEG): container finished" podID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerID="b5663ba63c709838a26639ae9dd26f72913463e4732ba3daa86117631cf183fa" exitCode=0
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.536951 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerDied","Data":"b5663ba63c709838a26639ae9dd26f72913463e4732ba3daa86117631cf183fa"}
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.540377 5108 generic.go:358] "Generic (PLEG): container finished" podID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerID="1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece" exitCode=0
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.540489 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerDied","Data":"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"}
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.540520 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqwnl" event={"ID":"726b3fe7-f433-4a31-a1df-fd2aa1aacda4","Type":"ContainerDied","Data":"a8a31d7675cc6ed8fbd7c7e06365552bec50815bf0e932e01e8b78f240c647e1"}
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.540539 5108 scope.go:117] "RemoveContainer" containerID="1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.540733 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqwnl"
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.544180 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z22np\" (UniqueName: \"kubernetes.io/projected/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-kube-api-access-z22np\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.544225 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.544241 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726b3fe7-f433-4a31-a1df-fd2aa1aacda4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.563885 5108 scope.go:117] "RemoveContainer" containerID="9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f"
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.592347 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqwnl"]
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.595997 5108 scope.go:117] "RemoveContainer" containerID="65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9"
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.596683 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lqwnl"]
Feb 19 00:11:55 crc kubenswrapper[5108]: I0219 00:11:55.858544 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" path="/var/lib/kubelet/pods/726b3fe7-f433-4a31-a1df-fd2aa1aacda4/volumes"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.131367 5108 scope.go:117] "RemoveContainer" containerID="1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"
Feb 19 00:11:56 crc kubenswrapper[5108]: E0219 00:11:56.136064 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece\": container with ID starting with 1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece not found: ID does not exist" containerID="1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.136149 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece"} err="failed to get container status \"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece\": rpc error: code = NotFound desc = could not find container \"1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece\": container with ID starting with 1d576735eec211d267cd0f2b571bcd7314db9384d403eb6a6b1c44b2957b5ece not found: ID does not exist"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.136232 5108 scope.go:117] "RemoveContainer" containerID="9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f"
Feb 19 00:11:56 crc kubenswrapper[5108]: E0219 00:11:56.137576 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f\": container with ID starting with 9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f not found: ID does not exist" containerID="9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.137660 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f"} err="failed to get container status \"9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f\": rpc error: code = NotFound desc = could not find container \"9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f\": container with ID starting with 9ae5ffb2fb8c8efc372eea2933d77102981ca89ce2d850339c942f5fd4f1543f not found: ID does not exist"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.137707 5108 scope.go:117] "RemoveContainer" containerID="65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9"
Feb 19 00:11:56 crc kubenswrapper[5108]: E0219 00:11:56.138186 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9\": container with ID starting with 65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9 not found: ID does not exist" containerID="65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.138226 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9"} err="failed to get container status \"65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9\": rpc error: code = NotFound desc = could not find container \"65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9\": container with ID starting with 65b386af343061ebfe339df6662b3c8a4bc901d82f86377e4479a9cf3dc7cbf9 not found: ID does not exist"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.426127 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.547757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-np88t" event={"ID":"56dc0859-c6fc-47fd-ab9c-25e116306330","Type":"ContainerDied","Data":"11f2fe8c04bb7a3b8eed82003e1261717c532ff2686dee85c5406417e292e39d"}
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.547815 5108 scope.go:117] "RemoveContainer" containerID="b5663ba63c709838a26639ae9dd26f72913463e4732ba3daa86117631cf183fa"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.547820 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-np88t"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.549430 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-csqkl" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="registry-server" containerID="cri-o://e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4" gracePeriod=2
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.569097 5108 scope.go:117] "RemoveContainer" containerID="6733933b926060d269425f4cdb6afed8c55843109e9c1cf0c40d68d452b9b7c2"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.573792 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d6q4\" (UniqueName: \"kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4\") pod \"56dc0859-c6fc-47fd-ab9c-25e116306330\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") "
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.573858 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content\") pod \"56dc0859-c6fc-47fd-ab9c-25e116306330\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") "
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.573885 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities\") pod \"56dc0859-c6fc-47fd-ab9c-25e116306330\" (UID: \"56dc0859-c6fc-47fd-ab9c-25e116306330\") "
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.575515 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities" (OuterVolumeSpecName: "utilities") pod "56dc0859-c6fc-47fd-ab9c-25e116306330" (UID: "56dc0859-c6fc-47fd-ab9c-25e116306330"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.581448 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4" (OuterVolumeSpecName: "kube-api-access-9d6q4") pod "56dc0859-c6fc-47fd-ab9c-25e116306330" (UID: "56dc0859-c6fc-47fd-ab9c-25e116306330"). InnerVolumeSpecName "kube-api-access-9d6q4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.583424 5108 scope.go:117] "RemoveContainer" containerID="e206af99fb789eaaea7085dd13e69e3f9f9860f2a16597d83050023d2a4eaf56"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.615023 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56dc0859-c6fc-47fd-ab9c-25e116306330" (UID: "56dc0859-c6fc-47fd-ab9c-25e116306330"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.675728 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9d6q4\" (UniqueName: \"kubernetes.io/projected/56dc0859-c6fc-47fd-ab9c-25e116306330-kube-api-access-9d6q4\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.675765 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.675773 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dc0859-c6fc-47fd-ab9c-25e116306330-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.869607 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csqkl"
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.884323 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-np88t"]
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.887410 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-np88t"]
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.981846 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities\") pod \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") "
Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.982219 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content\") pod \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.982338 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ppx\" (UniqueName: \"kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx\") pod \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\" (UID: \"70754f09-86a2-4b82-b04c-72dc6aa70b7b\") " Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.983440 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities" (OuterVolumeSpecName: "utilities") pod "70754f09-86a2-4b82-b04c-72dc6aa70b7b" (UID: "70754f09-86a2-4b82-b04c-72dc6aa70b7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.984958 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx" (OuterVolumeSpecName: "kube-api-access-k9ppx") pod "70754f09-86a2-4b82-b04c-72dc6aa70b7b" (UID: "70754f09-86a2-4b82-b04c-72dc6aa70b7b"). InnerVolumeSpecName "kube-api-access-k9ppx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:56 crc kubenswrapper[5108]: I0219 00:11:56.995177 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70754f09-86a2-4b82-b04c-72dc6aa70b7b" (UID: "70754f09-86a2-4b82-b04c-72dc6aa70b7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.083807 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.083849 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9ppx\" (UniqueName: \"kubernetes.io/projected/70754f09-86a2-4b82-b04c-72dc6aa70b7b-kube-api-access-k9ppx\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.083860 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70754f09-86a2-4b82-b04c-72dc6aa70b7b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.344546 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.345027 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98tbv" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="registry-server" containerID="cri-o://649578f264c6cee6e689175ede93e92ddb662f1492a5db22dc2bf4f3a33489d0" gracePeriod=2 Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.564514 5108 generic.go:358] "Generic (PLEG): container finished" podID="9656b253-82c5-4759-acc7-a885d8757845" containerID="649578f264c6cee6e689175ede93e92ddb662f1492a5db22dc2bf4f3a33489d0" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.564606 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerDied","Data":"649578f264c6cee6e689175ede93e92ddb662f1492a5db22dc2bf4f3a33489d0"} Feb 19 00:11:57 crc 
kubenswrapper[5108]: I0219 00:11:57.567307 5108 generic.go:358] "Generic (PLEG): container finished" podID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerID="e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.567493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerDied","Data":"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4"} Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.567524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csqkl" event={"ID":"70754f09-86a2-4b82-b04c-72dc6aa70b7b","Type":"ContainerDied","Data":"a1a62e1225b0a64b4aaf965c13e1a50bbbb140d69e1a5d9dcae782ed752bd88b"} Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.567546 5108 scope.go:117] "RemoveContainer" containerID="e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.567678 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csqkl" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.589721 5108 scope.go:117] "RemoveContainer" containerID="63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.623391 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"] Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.628517 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-csqkl"] Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.640947 5108 scope.go:117] "RemoveContainer" containerID="f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.700094 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.701708 5108 scope.go:117] "RemoveContainer" containerID="e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4" Feb 19 00:11:57 crc kubenswrapper[5108]: E0219 00:11:57.701953 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4\": container with ID starting with e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4 not found: ID does not exist" containerID="e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.701979 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4"} err="failed to get container status \"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4\": rpc error: code = NotFound desc = could 
not find container \"e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4\": container with ID starting with e73acc4f3e0b5f7f2b08d6a9454a70abb10c4ab36865eaae12f5f0bc1e3dc5b4 not found: ID does not exist" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.701996 5108 scope.go:117] "RemoveContainer" containerID="63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766" Feb 19 00:11:57 crc kubenswrapper[5108]: E0219 00:11:57.702160 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766\": container with ID starting with 63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766 not found: ID does not exist" containerID="63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.702188 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766"} err="failed to get container status \"63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766\": rpc error: code = NotFound desc = could not find container \"63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766\": container with ID starting with 63bec879f663bf4788e726a4080b8201a4334aa78eaf916c9dd03b19e8d39766 not found: ID does not exist" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.702200 5108 scope.go:117] "RemoveContainer" containerID="f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9" Feb 19 00:11:57 crc kubenswrapper[5108]: E0219 00:11:57.702352 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9\": container with ID starting with f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9 not found: 
ID does not exist" containerID="f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.702367 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9"} err="failed to get container status \"f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9\": rpc error: code = NotFound desc = could not find container \"f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9\": container with ID starting with f4dc183bbfb8a844d769ae8ab0efcb60715ff1a274c0fa50cc5f4d8a1738c5f9 not found: ID does not exist" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.791087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lppr5\" (UniqueName: \"kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5\") pod \"9656b253-82c5-4759-acc7-a885d8757845\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.791258 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities\") pod \"9656b253-82c5-4759-acc7-a885d8757845\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.791377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content\") pod \"9656b253-82c5-4759-acc7-a885d8757845\" (UID: \"9656b253-82c5-4759-acc7-a885d8757845\") " Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.793810 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities" (OuterVolumeSpecName: 
"utilities") pod "9656b253-82c5-4759-acc7-a885d8757845" (UID: "9656b253-82c5-4759-acc7-a885d8757845"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.797334 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5" (OuterVolumeSpecName: "kube-api-access-lppr5") pod "9656b253-82c5-4759-acc7-a885d8757845" (UID: "9656b253-82c5-4759-acc7-a885d8757845"). InnerVolumeSpecName "kube-api-access-lppr5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.854444 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" path="/var/lib/kubelet/pods/56dc0859-c6fc-47fd-ab9c-25e116306330/volumes" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.855100 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" path="/var/lib/kubelet/pods/70754f09-86a2-4b82-b04c-72dc6aa70b7b/volumes" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.892857 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.892902 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lppr5\" (UniqueName: \"kubernetes.io/projected/9656b253-82c5-4759-acc7-a885d8757845-kube-api-access-lppr5\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.908507 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9656b253-82c5-4759-acc7-a885d8757845" 
(UID: "9656b253-82c5-4759-acc7-a885d8757845"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:11:57 crc kubenswrapper[5108]: I0219 00:11:57.994520 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9656b253-82c5-4759-acc7-a885d8757845-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.579250 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98tbv" event={"ID":"9656b253-82c5-4759-acc7-a885d8757845","Type":"ContainerDied","Data":"914ca979bc74c9f2c46912a7aa5088df1c959ebbe9f1f219d5d6b7e0f726f01b"} Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.579278 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98tbv" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.579314 5108 scope.go:117] "RemoveContainer" containerID="649578f264c6cee6e689175ede93e92ddb662f1492a5db22dc2bf4f3a33489d0" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.598954 5108 scope.go:117] "RemoveContainer" containerID="52a01072ea5a1b4e5bacf104835c771dd19633f065cb8769166441cc9bc2f1b9" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.614511 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.617898 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98tbv"] Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.623264 5108 scope.go:117] "RemoveContainer" containerID="ce6c49aa7499bf317b5f76b5f0a57d15303008e4c2bbf41d875c4d2b5549de45" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.712492 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 19 00:11:58 crc 
kubenswrapper[5108]: I0219 00:11:58.713211 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713233 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713248 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713254 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713261 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42cf11b0-c684-4732-a90f-08e028c943ef" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713266 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cf11b0-c684-4732-a90f-08e028c943ef" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713274 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713296 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713303 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713310 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656b253-82c5-4759-acc7-a885d8757845" 
containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713316 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713321 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713330 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="587cab42-53b6-4b3f-a6a2-7fb27f5a8427" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713335 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="587cab42-53b6-4b3f-a6a2-7fb27f5a8427" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713342 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713347 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713355 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713361 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713372 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713378 5108 
state_mem.go:107] "Deleted CPUSet assignment" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713388 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713393 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713407 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713412 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713420 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713425 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="extract-utilities" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713431 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713437 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="extract-content" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713445 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="registry-server" 
Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713451 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713543 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="9656b253-82c5-4759-acc7-a885d8757845" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713554 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="587cab42-53b6-4b3f-a6a2-7fb27f5a8427" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713564 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="70754f09-86a2-4b82-b04c-72dc6aa70b7b" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713572 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="021ddbaf-7df5-4911-afaa-609338cbcd9b" containerName="kube-multus-additional-cni-plugins" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713579 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="42cf11b0-c684-4732-a90f-08e028c943ef" containerName="pruner" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713591 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="726b3fe7-f433-4a31-a1df-fd2aa1aacda4" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.713598 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="56dc0859-c6fc-47fd-ab9c-25e116306330" containerName="registry-server" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.727915 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.728208 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.730504 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.732471 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.808356 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.808747 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.909883 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.910012 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: 
\"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.910082 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:11:58 crc kubenswrapper[5108]: I0219 00:11:58.933822 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:11:59 crc kubenswrapper[5108]: I0219 00:11:59.070203 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:11:59 crc kubenswrapper[5108]: I0219 00:11:59.526704 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 19 00:11:59 crc kubenswrapper[5108]: I0219 00:11:59.589313 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"39d3a24b-6c0f-4943-ad07-35039a5124b9","Type":"ContainerStarted","Data":"05fa95ed5231cd7fee9b491cc0e15bf416ba700c84cde5b32a7a29ee09e15af2"}
Feb 19 00:11:59 crc kubenswrapper[5108]: I0219 00:11:59.859691 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9656b253-82c5-4759-acc7-a885d8757845" path="/var/lib/kubelet/pods/9656b253-82c5-4759-acc7-a885d8757845/volumes"
Feb 19 00:12:00 crc kubenswrapper[5108]: I0219 00:12:00.602164 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"39d3a24b-6c0f-4943-ad07-35039a5124b9","Type":"ContainerStarted","Data":"166069c37aa346ac430498e6d9fc448e100eedc9b5ac09d032ca79a3acbc7686"}
Feb 19 00:12:00 crc kubenswrapper[5108]: I0219 00:12:00.615551 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.615520393 podStartE2EDuration="2.615520393s" podCreationTimestamp="2026-02-19 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:00.614359492 +0000 UTC m=+179.581005800" watchObservedRunningTime="2026-02-19 00:12:00.615520393 +0000 UTC m=+179.582166721"
Feb 19 00:12:01 crc kubenswrapper[5108]: I0219 00:12:01.611823 5108 generic.go:358] "Generic (PLEG): container finished" podID="39d3a24b-6c0f-4943-ad07-35039a5124b9" containerID="166069c37aa346ac430498e6d9fc448e100eedc9b5ac09d032ca79a3acbc7686" exitCode=0
Feb 19 00:12:01 crc kubenswrapper[5108]: I0219 00:12:01.611927 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"39d3a24b-6c0f-4943-ad07-35039a5124b9","Type":"ContainerDied","Data":"166069c37aa346ac430498e6d9fc448e100eedc9b5ac09d032ca79a3acbc7686"}
Feb 19 00:12:01 crc kubenswrapper[5108]: I0219 00:12:01.973527 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56674: no serving certificate available for the kubelet"
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.812455 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.865851 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir\") pod \"39d3a24b-6c0f-4943-ad07-35039a5124b9\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") "
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.866235 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "39d3a24b-6c0f-4943-ad07-35039a5124b9" (UID: "39d3a24b-6c0f-4943-ad07-35039a5124b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.866380 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access\") pod \"39d3a24b-6c0f-4943-ad07-35039a5124b9\" (UID: \"39d3a24b-6c0f-4943-ad07-35039a5124b9\") "
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.867874 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39d3a24b-6c0f-4943-ad07-35039a5124b9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.883131 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "39d3a24b-6c0f-4943-ad07-35039a5124b9" (UID: "39d3a24b-6c0f-4943-ad07-35039a5124b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:02 crc kubenswrapper[5108]: I0219 00:12:02.969916 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39d3a24b-6c0f-4943-ad07-35039a5124b9-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.500471 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.501221 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39d3a24b-6c0f-4943-ad07-35039a5124b9" containerName="pruner"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.501246 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d3a24b-6c0f-4943-ad07-35039a5124b9" containerName="pruner"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.501357 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="39d3a24b-6c0f-4943-ad07-35039a5124b9" containerName="pruner"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.508322 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.509974 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.576922 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.577374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.577594 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.623850 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"39d3a24b-6c0f-4943-ad07-35039a5124b9","Type":"ContainerDied","Data":"05fa95ed5231cd7fee9b491cc0e15bf416ba700c84cde5b32a7a29ee09e15af2"}
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.623893 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fa95ed5231cd7fee9b491cc0e15bf416ba700c84cde5b32a7a29ee09e15af2"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.624019 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.679370 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.679435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.679523 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.679533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.679611 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.707682 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access\") pod \"installer-12-crc\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:03 crc kubenswrapper[5108]: I0219 00:12:03.824125 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:12:04 crc kubenswrapper[5108]: I0219 00:12:04.035383 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 19 00:12:04 crc kubenswrapper[5108]: I0219 00:12:04.632168 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"03c83199-6793-49da-834a-e14fa7b0488c","Type":"ContainerStarted","Data":"0ff56e2b92e6de9b315e5d613f7e2d05ef0801ea48f03b1fa9043abe00a4cd4f"}
Feb 19 00:12:04 crc kubenswrapper[5108]: I0219 00:12:04.632492 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"03c83199-6793-49da-834a-e14fa7b0488c","Type":"ContainerStarted","Data":"dcc5e19035de10a922540fd52a8c36419fc4bdbd48343296b9517c767af1b03d"}
Feb 19 00:12:04 crc kubenswrapper[5108]: I0219 00:12:04.653488 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.653466006 podStartE2EDuration="1.653466006s" podCreationTimestamp="2026-02-19 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:04.651429682 +0000 UTC m=+183.618075990" watchObservedRunningTime="2026-02-19 00:12:04.653466006 +0000 UTC m=+183.620112314"
Feb 19 00:12:11 crc kubenswrapper[5108]: I0219 00:12:11.970532 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"]
Feb 19 00:12:15 crc kubenswrapper[5108]: I0219 00:12:15.459448 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.019445 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift" containerID="cri-o://0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e" gracePeriod=15
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.485463 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.544902 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"]
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.545719 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.545746 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.545929 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" containerName="oauth-openshift"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.549358 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"]
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.549519 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.581687 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.581753 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.581781 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.581808 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.581829 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582032 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582079 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582121 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582172 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582204 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582236 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582268 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q42td\" (UniqueName: \"kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582357 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies\") pod \"b5775541-9300-4451-95dd-cb81bd25dd50\" (UID: \"b5775541-9300-4451-95dd-cb81bd25dd50\") "
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582471 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582630 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.582663 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5775541-9300-4451-95dd-cb81bd25dd50-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.583340 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.583372 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.583695 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.589170 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.589231 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td" (OuterVolumeSpecName: "kube-api-access-q42td") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "kube-api-access-q42td". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.589564 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.589726 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.590610 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.590913 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.591159 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.595255 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.595465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b5775541-9300-4451-95dd-cb81bd25dd50" (UID: "b5775541-9300-4451-95dd-cb81bd25dd50"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684335 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684415 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-session\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684440 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684743 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684842 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-error\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.684872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzw9\" (UniqueName: \"kubernetes.io/projected/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-kube-api-access-jxzw9\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685201 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685315 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-dir\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685418 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685512 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685625 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685731 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-login\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685785 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-policies\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685837 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685851 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685863 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685876 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q42td\" (UniqueName: \"kubernetes.io/projected/b5775541-9300-4451-95dd-cb81bd25dd50-kube-api-access-q42td\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685891 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685903 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685915 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5775541-9300-4451-95dd-cb81bd25dd50-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685927 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685977 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.685989 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.686001 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.686013 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5775541-9300-4451-95dd-cb81bd25dd50-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787358 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787470 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-dir\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787505 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787575 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787609 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-login\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787670 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-policies\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"
Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID:
\"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-session\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787955 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-error\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.787994 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jxzw9\" (UniqueName: \"kubernetes.io/projected/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-kube-api-access-jxzw9\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.789351 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.789682 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-policies\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.789879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-audit-dir\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.790009 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " 
pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.790507 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.794181 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-error\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.794365 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-session\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.794419 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-login\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.795048 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.795192 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.795337 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.795513 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.796572 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " 
pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.807447 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzw9\" (UniqueName: \"kubernetes.io/projected/65274ddb-7d7b-4ba5-8d17-6676b62fc4ed-kube-api-access-jxzw9\") pod \"oauth-openshift-85dc74b4f9-b5svx\" (UID: \"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed\") " pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.868369 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.871765 5108 generic.go:358] "Generic (PLEG): container finished" podID="b5775541-9300-4451-95dd-cb81bd25dd50" containerID="0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e" exitCode=0 Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.871885 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" event={"ID":"b5775541-9300-4451-95dd-cb81bd25dd50","Type":"ContainerDied","Data":"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e"} Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.872072 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" event={"ID":"b5775541-9300-4451-95dd-cb81bd25dd50","Type":"ContainerDied","Data":"3bb9b7133ad080274f0d30bd164c58934f68f5711d34aa25c60fe4320840d379"} Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.872124 5108 scope.go:117] "RemoveContainer" containerID="0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.872136 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-hhd9x" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.900600 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"] Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.903832 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-hhd9x"] Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.917586 5108 scope.go:117] "RemoveContainer" containerID="0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e" Feb 19 00:12:37 crc kubenswrapper[5108]: E0219 00:12:37.917958 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e\": container with ID starting with 0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e not found: ID does not exist" containerID="0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e" Feb 19 00:12:37 crc kubenswrapper[5108]: I0219 00:12:37.917984 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e"} err="failed to get container status \"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e\": rpc error: code = NotFound desc = could not find container \"0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e\": container with ID starting with 0ad0997e77f0a23c7dcb1656000e39ee54a229ab92d497667667a42b457ecd5e not found: ID does not exist" Feb 19 00:12:38 crc kubenswrapper[5108]: I0219 00:12:38.150382 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85dc74b4f9-b5svx"] Feb 19 00:12:38 crc kubenswrapper[5108]: I0219 00:12:38.877837 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" event={"ID":"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed","Type":"ContainerStarted","Data":"37606e50f43188a5ad81a78e613a829af0bb4a3c49ea7232b2ee8fd07c1863d4"} Feb 19 00:12:38 crc kubenswrapper[5108]: I0219 00:12:38.877878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" event={"ID":"65274ddb-7d7b-4ba5-8d17-6676b62fc4ed","Type":"ContainerStarted","Data":"bde45c8fb8cb186fe155922a7ea7f6b3708eb7192fadde5b68755021904b4788"} Feb 19 00:12:38 crc kubenswrapper[5108]: I0219 00:12:38.878351 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:38 crc kubenswrapper[5108]: I0219 00:12:38.910307 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" podStartSLOduration=27.910235898 podStartE2EDuration="27.910235898s" podCreationTimestamp="2026-02-19 00:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:38.908088318 +0000 UTC m=+217.874734616" watchObservedRunningTime="2026-02-19 00:12:38.910235898 +0000 UTC m=+217.876882246" Feb 19 00:12:39 crc kubenswrapper[5108]: I0219 00:12:39.256457 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85dc74b4f9-b5svx" Feb 19 00:12:39 crc kubenswrapper[5108]: I0219 00:12:39.862575 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5775541-9300-4451-95dd-cb81bd25dd50" path="/var/lib/kubelet/pods/b5775541-9300-4451-95dd-cb81bd25dd50/volumes" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.473543 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:12:42 crc 
kubenswrapper[5108]: I0219 00:12:42.491266 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.491551 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.491684 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.492769 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439" gracePeriod=15 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.492855 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa" gracePeriod=15 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493448 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.492851 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe" gracePeriod=15 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.492976 5108 kuberuntime_container.go:858] "Killing container with a 
grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9" gracePeriod=15 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493011 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7" gracePeriod=15 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493486 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493636 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493655 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493674 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493688 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493704 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493719 5108 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493749 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493766 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493799 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493817 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493848 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493863 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493884 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.493899 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494640 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494678 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494705 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494724 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494744 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494767 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494794 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.494813 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.495065 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.495086 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc 
kubenswrapper[5108]: I0219 00:12:42.496336 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.496626 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.496650 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.503097 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.548922 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.559417 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.559515 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 
00:12:42.559607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.559848 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.559968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.660923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 
19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661233 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661319 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.661379 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: 
I0219 00:12:42.661860 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662038 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662298 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: 
I0219 00:12:42.662374 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.662413 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763440 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763586 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763607 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763635 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.763911 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.764382 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.764592 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.919333 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.921285 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.922321 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa" exitCode=0 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.922362 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe" exitCode=0 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.922377 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9" exitCode=0 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.922396 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7" exitCode=2 Feb 19 00:12:42 crc kubenswrapper[5108]: I0219 00:12:42.922450 5108 scope.go:117] "RemoveContainer" containerID="822e49a3aba7546e96ad77fc32126b06b2b9ea84dd41a260abfa049408b88210" Feb 19 00:12:43 crc kubenswrapper[5108]: I0219 00:12:43.932660 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.903342 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.904470 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.905068 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.942286 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.943109 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439" exitCode=0 Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.943202 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.943297 5108 scope.go:117] "RemoveContainer" containerID="3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.955431 5108 scope.go:117] "RemoveContainer" containerID="ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.978597 5108 scope.go:117] "RemoveContainer" containerID="b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9" Feb 19 00:12:44 crc kubenswrapper[5108]: I0219 00:12:44.994038 5108 scope.go:117] "RemoveContainer" containerID="d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.009346 5108 scope.go:117] "RemoveContainer" containerID="2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.018222 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.018301 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.018528 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 19 00:12:45 crc 
kubenswrapper[5108]: I0219 00:12:45.018695 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.018770 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.020191 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.020468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.020740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.020809 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.023355 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.027319 5108 scope.go:117] "RemoveContainer" containerID="b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.094763 5108 scope.go:117] "RemoveContainer" containerID="3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.095205 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa\": container with ID starting with 3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa not found: ID does not exist" containerID="3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.095259 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa"} err="failed to get container status 
\"3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa\": rpc error: code = NotFound desc = could not find container \"3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa\": container with ID starting with 3087a7987aabe5e6ef1c2563b5bf3cae58259ff107b9dbf6154c5735b9b62daa not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.095289 5108 scope.go:117] "RemoveContainer" containerID="ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.095607 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\": container with ID starting with ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe not found: ID does not exist" containerID="ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.095650 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe"} err="failed to get container status \"ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\": rpc error: code = NotFound desc = could not find container \"ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe\": container with ID starting with ef323b110b97979b2d8edbe416bc98092aafad1e570437e28a12060f0526aebe not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.095673 5108 scope.go:117] "RemoveContainer" containerID="b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.095925 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\": container with ID starting with b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9 not found: ID does not exist" containerID="b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096000 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9"} err="failed to get container status \"b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\": rpc error: code = NotFound desc = could not find container \"b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9\": container with ID starting with b7c4ceeedfe9c9d3ede1fbae456ba89b71694877a872189251fa234d814ef7e9 not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096027 5108 scope.go:117] "RemoveContainer" containerID="d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.096259 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\": container with ID starting with d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7 not found: ID does not exist" containerID="d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096311 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7"} err="failed to get container status \"d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\": rpc error: code = NotFound desc = could not find container \"d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7\": container with ID 
starting with d8b8dbac5a39e503fb0d8288e1d9b85cf0f9ac5f3ffa7ad69929720b268467f7 not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096334 5108 scope.go:117] "RemoveContainer" containerID="2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.096605 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\": container with ID starting with 2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439 not found: ID does not exist" containerID="2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096640 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439"} err="failed to get container status \"2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\": rpc error: code = NotFound desc = could not find container \"2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439\": container with ID starting with 2f33764333ddfb6fb0050ddeb59b84a7d9c368486dbaa8c849710c7347ea2439 not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096662 5108 scope.go:117] "RemoveContainer" containerID="b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9" Feb 19 00:12:45 crc kubenswrapper[5108]: E0219 00:12:45.096904 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\": container with ID starting with b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9 not found: ID does not exist" containerID="b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9" Feb 19 
00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.096994 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9"} err="failed to get container status \"b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\": rpc error: code = NotFound desc = could not find container \"b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9\": container with ID starting with b9d1fe379b58166e813cbb6fb5bfa29ee4be0e49b1570f92707340174dd51ec9 not found: ID does not exist" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.120828 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.120864 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.120876 5108 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.120889 5108 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.120899 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.265712 5108 status_manager.go:895] "Failed to get status for pod" 
podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:45 crc kubenswrapper[5108]: I0219 00:12:45.860242 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.792748 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.794341 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.795342 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.795884 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.796563 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:46 
crc kubenswrapper[5108]: I0219 00:12:46.796651 5108 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.797262 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Feb 19 00:12:46 crc kubenswrapper[5108]: E0219 00:12:46.998598 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Feb 19 00:12:47 crc kubenswrapper[5108]: E0219 00:12:47.400365 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Feb 19 00:12:47 crc kubenswrapper[5108]: E0219 00:12:47.551277 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.552006 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:47 crc kubenswrapper[5108]: E0219 00:12:47.588669 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d7475e53e1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:12:47.587794459 +0000 UTC m=+226.554440797,LastTimestamp:2026-02-19 00:12:47.587794459 +0000 UTC m=+226.554440797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:12:47 crc kubenswrapper[5108]: E0219 00:12:47.958902 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d7475e53e1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:12:47.587794459 +0000 UTC m=+226.554440797,LastTimestamp:2026-02-19 00:12:47.587794459 +0000 UTC m=+226.554440797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.971592 5108 generic.go:358] "Generic (PLEG): container finished" podID="03c83199-6793-49da-834a-e14fa7b0488c" containerID="0ff56e2b92e6de9b315e5d613f7e2d05ef0801ea48f03b1fa9043abe00a4cd4f" exitCode=0 Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.971672 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"03c83199-6793-49da-834a-e14fa7b0488c","Type":"ContainerDied","Data":"0ff56e2b92e6de9b315e5d613f7e2d05ef0801ea48f03b1fa9043abe00a4cd4f"} Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.972506 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.973976 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b"} Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.974051 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"3d71fb06931f7245e9f0b432fd7d2be3bb28fd9784371e3390b7304e88118146"} Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.974560 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:47 crc kubenswrapper[5108]: I0219 00:12:47.974610 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:47 crc kubenswrapper[5108]: E0219 00:12:47.975506 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:48 crc kubenswrapper[5108]: E0219 00:12:48.202971 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.312082 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.312900 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.379677 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access\") pod \"03c83199-6793-49da-834a-e14fa7b0488c\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.379883 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock\") pod \"03c83199-6793-49da-834a-e14fa7b0488c\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.379926 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir\") pod \"03c83199-6793-49da-834a-e14fa7b0488c\" (UID: \"03c83199-6793-49da-834a-e14fa7b0488c\") " Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.380149 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock" (OuterVolumeSpecName: "var-lock") pod "03c83199-6793-49da-834a-e14fa7b0488c" (UID: "03c83199-6793-49da-834a-e14fa7b0488c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.380183 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03c83199-6793-49da-834a-e14fa7b0488c" (UID: "03c83199-6793-49da-834a-e14fa7b0488c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.380487 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.380528 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c83199-6793-49da-834a-e14fa7b0488c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.389482 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03c83199-6793-49da-834a-e14fa7b0488c" (UID: "03c83199-6793-49da-834a-e14fa7b0488c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:49 crc kubenswrapper[5108]: I0219 00:12:49.481990 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c83199-6793-49da-834a-e14fa7b0488c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:49 crc kubenswrapper[5108]: E0219 00:12:49.804721 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Feb 19 00:12:50 crc kubenswrapper[5108]: I0219 00:12:50.001750 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:50 crc kubenswrapper[5108]: I0219 00:12:50.001794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"03c83199-6793-49da-834a-e14fa7b0488c","Type":"ContainerDied","Data":"dcc5e19035de10a922540fd52a8c36419fc4bdbd48343296b9517c767af1b03d"} Feb 19 00:12:50 crc kubenswrapper[5108]: I0219 00:12:50.002818 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc5e19035de10a922540fd52a8c36419fc4bdbd48343296b9517c767af1b03d" Feb 19 00:12:50 crc kubenswrapper[5108]: I0219 00:12:50.010766 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:50 crc kubenswrapper[5108]: E0219 00:12:50.895106 5108 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch 
PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" volumeName="registry-storage" Feb 19 00:12:51 crc kubenswrapper[5108]: I0219 00:12:51.854564 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:53 crc kubenswrapper[5108]: E0219 00:12:53.005798 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="6.4s" Feb 19 00:12:54 crc kubenswrapper[5108]: I0219 00:12:54.848185 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:54 crc kubenswrapper[5108]: I0219 00:12:54.849194 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:54 crc kubenswrapper[5108]: I0219 00:12:54.872571 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:54 crc kubenswrapper[5108]: I0219 00:12:54.872621 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:54 crc kubenswrapper[5108]: E0219 00:12:54.872980 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:54 crc kubenswrapper[5108]: I0219 00:12:54.873327 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:55 crc kubenswrapper[5108]: I0219 00:12:55.049293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"045b2e9c41a481f2ed9e57d9f0a85da86400e97346b9e1db4d8b8a891398e480"} Feb 19 00:12:56 crc kubenswrapper[5108]: I0219 00:12:56.057914 5108 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="c31159e43f62fa4f6cc54fe2116764f1c1a87ecb90bd7ccc1735cf9fbe076606" exitCode=0 Feb 19 00:12:56 crc kubenswrapper[5108]: I0219 00:12:56.058071 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"c31159e43f62fa4f6cc54fe2116764f1c1a87ecb90bd7ccc1735cf9fbe076606"} Feb 19 00:12:56 crc kubenswrapper[5108]: I0219 00:12:56.058495 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:56 crc kubenswrapper[5108]: I0219 00:12:56.059031 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:56 crc kubenswrapper[5108]: I0219 00:12:56.059080 5108 status_manager.go:895] "Failed to get status for pod" podUID="03c83199-6793-49da-834a-e14fa7b0488c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Feb 19 00:12:56 crc kubenswrapper[5108]: E0219 00:12:56.059591 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.071851 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.072231 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95" exitCode=1 Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.072379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95"} Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.073219 5108 scope.go:117] "RemoveContainer" containerID="38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95" Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.077106 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a88d1705a1ba2b45a710cd15865abe8239bc62efca2f4ad00bdbf70364e6f37a"} Feb 19 00:12:57 crc kubenswrapper[5108]: I0219 00:12:57.077153 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ff3b0ee4b1372fcb04728a491e0a4b1d665bd52e7c956f4d0c37a70510faaf05"} Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.085089 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.085259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8cb3df0a6ab68213639574e074b739ce82a53a942ee77ee186fdce9ad1867037"} Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089013 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e4772d5b8a7738ffb5b1556724df6282ba03901dfdc6f2c9b85a305724a62e2e"} Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ff9bfc979afe238d622d828a031d78d15b1181826d782c22fe8e11c1c9ce0fe3"} Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089066 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fb21fca37fbf374d963a596815388aa84c39652a31fe6f401620424d1400db18"} Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089237 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089376 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:58 crc kubenswrapper[5108]: I0219 00:12:58.089407 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:12:59 crc kubenswrapper[5108]: I0219 00:12:59.874501 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:59 crc kubenswrapper[5108]: I0219 00:12:59.874905 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:12:59 crc kubenswrapper[5108]: I0219 00:12:59.884284 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:03 crc kubenswrapper[5108]: I0219 00:13:03.276365 5108 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:03 crc kubenswrapper[5108]: I0219 00:13:03.277037 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:03 crc kubenswrapper[5108]: I0219 00:13:03.397121 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8338d9a3-d45b-4d66-8418-ca473c0d7c8e" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.134825 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.134865 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.139790 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8338d9a3-d45b-4d66-8418-ca473c0d7c8e" Feb 19 00:13:04 crc 
kubenswrapper[5108]: I0219 00:13:04.143900 5108 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ff3b0ee4b1372fcb04728a491e0a4b1d665bd52e7c956f4d0c37a70510faaf05" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.143960 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.214533 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.214896 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.215137 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 00:13:04 crc kubenswrapper[5108]: I0219 00:13:04.960871 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:13:05 crc kubenswrapper[5108]: I0219 00:13:05.142458 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:13:05 crc kubenswrapper[5108]: I0219 00:13:05.142496 5108 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d740232-965c-462f-99ca-35945243e20c" Feb 19 00:13:05 crc kubenswrapper[5108]: I0219 00:13:05.148319 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8338d9a3-d45b-4d66-8418-ca473c0d7c8e" Feb 19 00:13:06 crc kubenswrapper[5108]: I0219 00:13:06.144987 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:06 crc kubenswrapper[5108]: I0219 00:13:06.145055 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:13 crc kubenswrapper[5108]: I0219 00:13:13.870002 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.213993 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.214063 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.254338 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.498292 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.573767 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.681478 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.739824 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.912138 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 19 00:13:14 crc kubenswrapper[5108]: I0219 00:13:14.961358 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.017470 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.094930 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.230038 5108 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.246543 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.308634 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.439147 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.596926 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.669342 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.685224 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.698657 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 19 00:13:15 crc kubenswrapper[5108]: I0219 00:13:15.786113 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.016875 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 19 
00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.023499 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.173515 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.245360 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.343685 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.457021 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.547997 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.605461 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.627355 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:16 crc kubenswrapper[5108]: I0219 00:13:16.882929 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.036460 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.050119 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.184333 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.202289 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.225347 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.380834 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.412875 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.430656 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.430927 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.453228 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.525478 
5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.532363 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.547424 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.552820 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.601171 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.616658 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.770643 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.906776 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.982555 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 19 00:13:17 crc kubenswrapper[5108]: I0219 00:13:17.999607 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.040471 
5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.040925 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.054276 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.203536 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.257926 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.304400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.381501 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.411376 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.541024 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.591288 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.666138 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.757970 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.812669 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.838620 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.847112 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 19 00:13:18 crc kubenswrapper[5108]: I0219 00:13:18.859687 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.101841 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.250306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.253247 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.267890 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.282028 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.389310 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.500600 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.707743 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.725243 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.730888 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.760323 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.774341 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.804749 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.809125 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.809193 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.815387 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.826851 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.826837965 podStartE2EDuration="16.826837965s" podCreationTimestamp="2026-02-19 00:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:19.825840226 +0000 UTC m=+258.792486534" watchObservedRunningTime="2026-02-19 00:13:19.826837965 +0000 UTC m=+258.793484273"
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.850365 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.915609 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.951227 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 19 00:13:19 crc kubenswrapper[5108]: I0219 00:13:19.979433 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.039874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.130360 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.176935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.208154 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.338099 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.404525 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.419702 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.514426 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.533759 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.648288 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.794450 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.794450 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.819987 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 19 00:13:20 crc kubenswrapper[5108]: I0219 00:13:20.885696 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.010338 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.023293 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.038536 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.154831 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.334718 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.375499 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.414250 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.502295 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.660257 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.668322 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.670755 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.736781 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.780657 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.787443 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.833549 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.855326 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.870220 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.872798 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.910171 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.954247 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 19 00:13:21 crc kubenswrapper[5108]: I0219 00:13:21.992783 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.021775 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.023255 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.052892 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.094114 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.109659 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.162432 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.169612 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.181078 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.187776 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.267224 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.290068 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.293794 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.304596 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.387123 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.426382 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.462239 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.488563 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.630543 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.679923 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.782260 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.787479 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.827771 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.840155 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 19 00:13:22 crc kubenswrapper[5108]: I0219 00:13:22.966033 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.010408 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.114878 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.261055 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.297413 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.315598 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.421730 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.480925 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.632049 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.674604 5108 ???:1] "http: TLS handshake error from 192.168.126.11:34032: no serving certificate available for the kubelet"
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.696682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.702957 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.852290 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.855808 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.863816 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.863966 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.877669 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.914990 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.915095 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 19 00:13:23 crc kubenswrapper[5108]: I0219 00:13:23.952762 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.031037 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.109565 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.160955 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.173847 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.214290 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.214372 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.214429 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.215201 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"8cb3df0a6ab68213639574e074b739ce82a53a942ee77ee186fdce9ad1867037"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.215348 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://8cb3df0a6ab68213639574e074b739ce82a53a942ee77ee186fdce9ad1867037" gracePeriod=30
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.220988 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.246921 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.288528 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.315820 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.465837 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.518543 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.518544 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.533187 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.540517 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.555048 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.574336 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.598604 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.642026 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.660710 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.669804 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.749893 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.751636 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.852516 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.952695 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.977735 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 19 00:13:24 crc kubenswrapper[5108]: I0219 00:13:24.993111 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.024396 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.102856 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.188410 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.280951 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.312408 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.316821 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.325190 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.379080 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.397572 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.423417 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.458003 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.481616 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.514753 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.627981 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.659836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.679693 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.684486 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.716076 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.757158 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.776842 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.806598 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.879357 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.966613 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.966894 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b" gracePeriod=5
Feb 19 00:13:25 crc kubenswrapper[5108]: I0219 00:13:25.973047 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.030599 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.073558 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.131160 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.162029 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.181786 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.204444 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.210758 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.241759 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.273138 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.437519 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.440380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.478926 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.519637 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.699164 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.795819 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.902899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 19 00:13:26 crc kubenswrapper[5108]: I0219 00:13:26.996586 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.027286 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.055013 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.100601 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.170874 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.191682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.253978 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.340856 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.437298 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.469296 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.472380 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.671204 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 19 00:13:27 crc kubenswrapper[5108]: I0219 00:13:27.969856 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.025264 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.078845 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.134914 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.436422 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.480007 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.624634 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 19 00:13:28 crc kubenswrapper[5108]: I0219 00:13:28.977619 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.016781 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.095086 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.206369 5108 reflector.go:430]
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.388930 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.795365 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.834374 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 19 00:13:29 crc kubenswrapper[5108]: I0219 00:13:29.921911 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 19 00:13:30 crc kubenswrapper[5108]: I0219 00:13:30.302168 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 19 00:13:30 crc kubenswrapper[5108]: I0219 00:13:30.460192 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 19 00:13:30 crc kubenswrapper[5108]: I0219 00:13:30.755617 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.101694 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.101773 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.103501 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.126041 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.176385 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.200574 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.200633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.200679 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:31 crc 
kubenswrapper[5108]: I0219 00:13:31.200704 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.200870 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.201237 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.201245 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.201272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.201302 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.209627 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.302764 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.302808 5108 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.302823 5108 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.302834 5108 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 
00:13:31.302845 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.317621 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.317700 5108 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b" exitCode=137 Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.317780 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.317873 5108 scope.go:117] "RemoveContainer" containerID="faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.334869 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.340092 5108 scope.go:117] "RemoveContainer" containerID="faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b" Feb 19 00:13:31 crc kubenswrapper[5108]: E0219 00:13:31.340597 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b\": container with ID starting with faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b not found: ID does not exist" containerID="faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.340631 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b"} err="failed to get container status \"faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b\": rpc error: code = NotFound desc = could not find container \"faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b\": container with ID starting with faca85772451371571c10c91c98825b52331d435435b02689665efcc7612708b not found: ID does not exist" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.856535 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.858804 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 19 00:13:31 crc kubenswrapper[5108]: I0219 00:13:31.905260 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 19 00:13:36 crc kubenswrapper[5108]: I0219 00:13:36.145742 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:36 crc kubenswrapper[5108]: I0219 00:13:36.146641 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:54 crc kubenswrapper[5108]: I0219 00:13:54.462712 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:13:54 crc kubenswrapper[5108]: I0219 00:13:54.465161 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:13:54 crc kubenswrapper[5108]: I0219 00:13:54.465221 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="8cb3df0a6ab68213639574e074b739ce82a53a942ee77ee186fdce9ad1867037" exitCode=137 Feb 19 00:13:54 crc kubenswrapper[5108]: I0219 00:13:54.465314 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"8cb3df0a6ab68213639574e074b739ce82a53a942ee77ee186fdce9ad1867037"} Feb 19 00:13:54 crc kubenswrapper[5108]: I0219 00:13:54.465373 5108 scope.go:117] "RemoveContainer" containerID="38b7faa07e761acd9208316fefe7335a94397d5ddbe6b7baa6fda685d86e3e95" Feb 19 00:13:55 crc kubenswrapper[5108]: I0219 00:13:55.478028 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:13:55 crc kubenswrapper[5108]: I0219 00:13:55.480224 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"92a7e03644087e2db0137aa9fc2ee100b4ebdf28b6233213779e2249994d6c38"} Feb 19 00:14:02 crc kubenswrapper[5108]: I0219 00:14:02.051041 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:14:02 crc kubenswrapper[5108]: I0219 00:14:02.051083 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:14:04 crc kubenswrapper[5108]: I0219 00:14:04.213776 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:14:04 crc kubenswrapper[5108]: I0219 00:14:04.220416 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:14:04 crc kubenswrapper[5108]: I0219 00:14:04.546569 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:14:04 crc kubenswrapper[5108]: I0219 00:14:04.551072 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.145512 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.145611 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.145678 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.146464 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.146553 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682" gracePeriod=600 Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.275150 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.563601 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" 
containerID="9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682" exitCode=0 Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.563708 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682"} Feb 19 00:14:06 crc kubenswrapper[5108]: I0219 00:14:06.563759 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7"} Feb 19 00:14:11 crc kubenswrapper[5108]: I0219 00:14:11.081042 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.557481 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"] Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.558226 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" containerID="cri-o://4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470" gracePeriod=30 Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.567877 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"] Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.568171 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" 
podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" containerID="cri-o://7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312" gracePeriod=30 Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.649050 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-qxx5n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.649062 5108 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-lkp65 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.649122 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 00:14:14 crc kubenswrapper[5108]: I0219 00:14:14.649179 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.011761 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.058996 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56b7454444-zgckz"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059697 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059720 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059738 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059746 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059759 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03c83199-6793-49da-834a-e14fa7b0488c" containerName="installer" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059767 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c83199-6793-49da-834a-e14fa7b0488c" containerName="installer" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059870 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059887 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="603b852f-0dcf-40af-b879-4df324bb8326" containerName="controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.059898 5108 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="03c83199-6793-49da-834a-e14fa7b0488c" containerName="installer" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.063908 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.067362 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56b7454444-zgckz"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.083036 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.109841 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.110408 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.110425 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.110541 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerName="route-controller-manager" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.116222 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.118477 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.134586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca\") pod \"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.134858 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles\") pod \"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.134982 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config\") pod \"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.135097 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert\") pod \"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.135175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5skf4\" (UniqueName: \"kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4\") pod 
\"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.135265 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp\") pod \"603b852f-0dcf-40af-b879-4df324bb8326\" (UID: \"603b852f-0dcf-40af-b879-4df324bb8326\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.135449 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca" (OuterVolumeSpecName: "client-ca") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.135646 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.136333 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp" (OuterVolumeSpecName: "tmp") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.137129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config" (OuterVolumeSpecName: "config") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.137740 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.141289 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4" (OuterVolumeSpecName: "kube-api-access-5skf4") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "kube-api-access-5skf4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.146251 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "603b852f-0dcf-40af-b879-4df324bb8326" (UID: "603b852f-0dcf-40af-b879-4df324bb8326"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236112 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config\") pod \"62ae86d1-5727-4420-9503-8d2aa58266ff\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236200 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert\") pod \"62ae86d1-5727-4420-9503-8d2aa58266ff\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236375 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnwb8\" (UniqueName: \"kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8\") pod \"62ae86d1-5727-4420-9503-8d2aa58266ff\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236450 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp\") pod \"62ae86d1-5727-4420-9503-8d2aa58266ff\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236475 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca\") pod \"62ae86d1-5727-4420-9503-8d2aa58266ff\" (UID: \"62ae86d1-5727-4420-9503-8d2aa58266ff\") " Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236569 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wl9\" (UniqueName: 
\"kubernetes.io/projected/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-kube-api-access-w4wl9\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236610 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-config\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236638 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-config\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236679 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnq6m\" (UniqueName: \"kubernetes.io/projected/64884e18-0bd9-4d84-a408-70fdd654e6e7-kube-api-access-bnq6m\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236706 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-serving-cert\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " 
pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236751 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp" (OuterVolumeSpecName: "tmp") pod "62ae86d1-5727-4420-9503-8d2aa58266ff" (UID: "62ae86d1-5727-4420-9503-8d2aa58266ff"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236806 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-tmp\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236960 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config" (OuterVolumeSpecName: "config") pod "62ae86d1-5727-4420-9503-8d2aa58266ff" (UID: "62ae86d1-5727-4420-9503-8d2aa58266ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.236988 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64884e18-0bd9-4d84-a408-70fdd654e6e7-serving-cert\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-proxy-ca-bundles\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237124 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-client-ca\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237195 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-client-ca\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237221 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/64884e18-0bd9-4d84-a408-70fdd654e6e7-tmp\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237305 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5skf4\" (UniqueName: \"kubernetes.io/projected/603b852f-0dcf-40af-b879-4df324bb8326-kube-api-access-5skf4\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237327 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/603b852f-0dcf-40af-b879-4df324bb8326-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237341 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/62ae86d1-5727-4420-9503-8d2aa58266ff-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237354 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237366 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237380 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603b852f-0dcf-40af-b879-4df324bb8326-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237391 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/603b852f-0dcf-40af-b879-4df324bb8326-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.237667 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca" (OuterVolumeSpecName: "client-ca") pod "62ae86d1-5727-4420-9503-8d2aa58266ff" (UID: "62ae86d1-5727-4420-9503-8d2aa58266ff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.240637 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8" (OuterVolumeSpecName: "kube-api-access-nnwb8") pod "62ae86d1-5727-4420-9503-8d2aa58266ff" (UID: "62ae86d1-5727-4420-9503-8d2aa58266ff"). InnerVolumeSpecName "kube-api-access-nnwb8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.241040 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62ae86d1-5727-4420-9503-8d2aa58266ff" (UID: "62ae86d1-5727-4420-9503-8d2aa58266ff"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.338885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnq6m\" (UniqueName: \"kubernetes.io/projected/64884e18-0bd9-4d84-a408-70fdd654e6e7-kube-api-access-bnq6m\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.338962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-serving-cert\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.338997 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-tmp\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339042 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64884e18-0bd9-4d84-a408-70fdd654e6e7-serving-cert\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339073 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-proxy-ca-bundles\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339103 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-client-ca\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-client-ca\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339157 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/64884e18-0bd9-4d84-a408-70fdd654e6e7-tmp\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wl9\" (UniqueName: \"kubernetes.io/projected/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-kube-api-access-w4wl9\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 
00:14:15.339245 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-config\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339276 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-config\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339318 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nnwb8\" (UniqueName: \"kubernetes.io/projected/62ae86d1-5727-4420-9503-8d2aa58266ff-kube-api-access-nnwb8\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339333 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62ae86d1-5727-4420-9503-8d2aa58266ff-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.339344 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ae86d1-5727-4420-9503-8d2aa58266ff-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.340497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/64884e18-0bd9-4d84-a408-70fdd654e6e7-tmp\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc 
kubenswrapper[5108]: I0219 00:14:15.340896 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-client-ca\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.340976 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-client-ca\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.341030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-config\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.341416 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-tmp\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.341481 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-config\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " 
pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.341623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64884e18-0bd9-4d84-a408-70fdd654e6e7-proxy-ca-bundles\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.345224 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-serving-cert\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.345252 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64884e18-0bd9-4d84-a408-70fdd654e6e7-serving-cert\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.361800 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wl9\" (UniqueName: \"kubernetes.io/projected/5f47ac53-5cce-40a5-a12c-e9f437ffc26c-kube-api-access-w4wl9\") pod \"route-controller-manager-84d5584cf6-s29qf\" (UID: \"5f47ac53-5cce-40a5-a12c-e9f437ffc26c\") " pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.362858 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnq6m\" (UniqueName: 
\"kubernetes.io/projected/64884e18-0bd9-4d84-a408-70fdd654e6e7-kube-api-access-bnq6m\") pod \"controller-manager-56b7454444-zgckz\" (UID: \"64884e18-0bd9-4d84-a408-70fdd654e6e7\") " pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.402133 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.431761 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.625486 5108 generic.go:358] "Generic (PLEG): container finished" podID="603b852f-0dcf-40af-b879-4df324bb8326" containerID="4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470" exitCode=0 Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.626822 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" event={"ID":"603b852f-0dcf-40af-b879-4df324bb8326","Type":"ContainerDied","Data":"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470"} Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.626951 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" event={"ID":"603b852f-0dcf-40af-b879-4df324bb8326","Type":"ContainerDied","Data":"3bfac49b1f0e29d933f1140c8a416a5e9865655f2ee6d7948a482ce1d9cc94bd"} Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.626974 5108 scope.go:117] "RemoveContainer" containerID="4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.627220 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qxx5n" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.633543 5108 generic.go:358] "Generic (PLEG): container finished" podID="62ae86d1-5727-4420-9503-8d2aa58266ff" containerID="7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312" exitCode=0 Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.633669 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" event={"ID":"62ae86d1-5727-4420-9503-8d2aa58266ff","Type":"ContainerDied","Data":"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312"} Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.633694 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" event={"ID":"62ae86d1-5727-4420-9503-8d2aa58266ff","Type":"ContainerDied","Data":"5f411d8b755708fe88d577ddb3488c0ded653dbf4a62f478b941011ac6833e4e"} Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.633765 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.659066 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56b7454444-zgckz"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.670275 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.673467 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lkp65"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.678719 5108 scope.go:117] "RemoveContainer" containerID="4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470" Feb 19 00:14:15 crc kubenswrapper[5108]: E0219 00:14:15.679092 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470\": container with ID starting with 4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470 not found: ID does not exist" containerID="4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.679217 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470"} err="failed to get container status \"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470\": rpc error: code = NotFound desc = could not find container \"4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470\": container with ID starting with 4671a3d02314cd0b822f6974b35ba28e8c37730c2e184efcb96f3b847d2cd470 not found: ID does not exist" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 
00:14:15.679327 5108 scope.go:117] "RemoveContainer" containerID="7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312" Feb 19 00:14:15 crc kubenswrapper[5108]: W0219 00:14:15.684169 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64884e18_0bd9_4d84_a408_70fdd654e6e7.slice/crio-45ddab941ecec07659d421def51e5c513a83dcb833698a4ffc1a56941f5ce445 WatchSource:0}: Error finding container 45ddab941ecec07659d421def51e5c513a83dcb833698a4ffc1a56941f5ce445: Status 404 returned error can't find the container with id 45ddab941ecec07659d421def51e5c513a83dcb833698a4ffc1a56941f5ce445 Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.691264 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"] Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.697261 5108 scope.go:117] "RemoveContainer" containerID="7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.697634 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qxx5n"] Feb 19 00:14:15 crc kubenswrapper[5108]: E0219 00:14:15.697665 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312\": container with ID starting with 7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312 not found: ID does not exist" containerID="7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.697702 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312"} err="failed to get container status 
\"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312\": rpc error: code = NotFound desc = could not find container \"7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312\": container with ID starting with 7a465c7fccca4f8d196523b708d11b24d073b2b5e919e6fe0d196a916955d312 not found: ID does not exist" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.735492 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf"] Feb 19 00:14:15 crc kubenswrapper[5108]: W0219 00:14:15.746794 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f47ac53_5cce_40a5_a12c_e9f437ffc26c.slice/crio-d483654ba5926d5ceabe25eae5f7c15cfd8869c6a4eb73eef73d75049b6ccca9 WatchSource:0}: Error finding container d483654ba5926d5ceabe25eae5f7c15cfd8869c6a4eb73eef73d75049b6ccca9: Status 404 returned error can't find the container with id d483654ba5926d5ceabe25eae5f7c15cfd8869c6a4eb73eef73d75049b6ccca9 Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.855764 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603b852f-0dcf-40af-b879-4df324bb8326" path="/var/lib/kubelet/pods/603b852f-0dcf-40af-b879-4df324bb8326/volumes" Feb 19 00:14:15 crc kubenswrapper[5108]: I0219 00:14:15.856819 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ae86d1-5727-4420-9503-8d2aa58266ff" path="/var/lib/kubelet/pods/62ae86d1-5727-4420-9503-8d2aa58266ff/volumes" Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.640816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" event={"ID":"5f47ac53-5cce-40a5-a12c-e9f437ffc26c","Type":"ContainerStarted","Data":"17828a34f02b2f7740cd9c6f80878c581a172fd327dab2e3c71b9f28d1ca40b8"} Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.640855 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" event={"ID":"5f47ac53-5cce-40a5-a12c-e9f437ffc26c","Type":"ContainerStarted","Data":"d483654ba5926d5ceabe25eae5f7c15cfd8869c6a4eb73eef73d75049b6ccca9"}
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.643095 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf"
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.644958 5108 generic.go:358] "Generic (PLEG): container finished" podID="5336aa1a-347f-403d-8bb6-882d11120822" containerID="5ce9ca5c0a8abd5dc27f4dec993e9cc0bad46b6f4f8c8216af8046b994868f1e" exitCode=0
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.645025 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-mpp5j" event={"ID":"5336aa1a-347f-403d-8bb6-882d11120822","Type":"ContainerDied","Data":"5ce9ca5c0a8abd5dc27f4dec993e9cc0bad46b6f4f8c8216af8046b994868f1e"}
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.647172 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf"
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.649397 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" event={"ID":"64884e18-0bd9-4d84-a408-70fdd654e6e7","Type":"ContainerStarted","Data":"f9d67f2c9cddd241393e2a5b517fe395a0c6fcad396803d267e0d1c64330a4de"}
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.649426 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" event={"ID":"64884e18-0bd9-4d84-a408-70fdd654e6e7","Type":"ContainerStarted","Data":"45ddab941ecec07659d421def51e5c513a83dcb833698a4ffc1a56941f5ce445"}
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.649642 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz"
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.654880 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz"
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.660658 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84d5584cf6-s29qf" podStartSLOduration=2.660647509 podStartE2EDuration="2.660647509s" podCreationTimestamp="2026-02-19 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:16.659547668 +0000 UTC m=+315.626193976" watchObservedRunningTime="2026-02-19 00:14:16.660647509 +0000 UTC m=+315.627293817"
Feb 19 00:14:16 crc kubenswrapper[5108]: I0219 00:14:16.676847 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56b7454444-zgckz" podStartSLOduration=2.676834229 podStartE2EDuration="2.676834229s" podCreationTimestamp="2026-02-19 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:16.675891502 +0000 UTC m=+315.642537810" watchObservedRunningTime="2026-02-19 00:14:16.676834229 +0000 UTC m=+315.643480537"
Feb 19 00:14:17 crc kubenswrapper[5108]: I0219 00:14:17.946154 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-mpp5j"
Feb 19 00:14:17 crc kubenswrapper[5108]: I0219 00:14:17.977134 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca\") pod \"5336aa1a-347f-403d-8bb6-882d11120822\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") "
Feb 19 00:14:17 crc kubenswrapper[5108]: I0219 00:14:17.978202 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca" (OuterVolumeSpecName: "serviceca") pod "5336aa1a-347f-403d-8bb6-882d11120822" (UID: "5336aa1a-347f-403d-8bb6-882d11120822"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.078273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjnl9\" (UniqueName: \"kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9\") pod \"5336aa1a-347f-403d-8bb6-882d11120822\" (UID: \"5336aa1a-347f-403d-8bb6-882d11120822\") "
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.078478 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5336aa1a-347f-403d-8bb6-882d11120822-serviceca\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.084550 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9" (OuterVolumeSpecName: "kube-api-access-hjnl9") pod "5336aa1a-347f-403d-8bb6-882d11120822" (UID: "5336aa1a-347f-403d-8bb6-882d11120822"). InnerVolumeSpecName "kube-api-access-hjnl9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.179299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjnl9\" (UniqueName: \"kubernetes.io/projected/5336aa1a-347f-403d-8bb6-882d11120822-kube-api-access-hjnl9\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.663840 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-mpp5j" event={"ID":"5336aa1a-347f-403d-8bb6-882d11120822","Type":"ContainerDied","Data":"94d45cfb870acf3761a8b312f729cdf86ac5869246eaedafefca83782cf8b237"}
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.663903 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94d45cfb870acf3761a8b312f729cdf86ac5869246eaedafefca83782cf8b237"
Feb 19 00:14:18 crc kubenswrapper[5108]: I0219 00:14:18.663851 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-mpp5j"
Feb 19 00:14:34 crc kubenswrapper[5108]: I0219 00:14:34.884618 5108 ???:1] "http: TLS handshake error from 192.168.126.11:35020: no serving certificate available for the kubelet"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.399918 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g27t"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.400817 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7g27t" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="registry-server" containerID="cri-o://d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" gracePeriod=30
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.414996 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bf6wt"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.415366 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bf6wt" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="registry-server" containerID="cri-o://fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" gracePeriod=30
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.429771 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.430010 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" containerID="cri-o://9f7294aa24b5b6a57fcfe7a4cba4d508dfc953f676ec6b571542117ef5f6d5f5" gracePeriod=30
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.440197 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.440527 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-df8pn" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="registry-server" containerID="cri-o://5d07707604dc8cad65aed0301c1862990e0f5ee0f21acc3118e332386938b333" gracePeriod=30
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.447977 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.448301 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pgh2p" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="registry-server" containerID="cri-o://a8163bc7543e908e819e02d90dae254a8028c133bb32588bec9906b432ffddb1" gracePeriod=30
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.457017 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.457557 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5336aa1a-347f-403d-8bb6-882d11120822" containerName="image-pruner"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.457574 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5336aa1a-347f-403d-8bb6-882d11120822" containerName="image-pruner"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.457654 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5336aa1a-347f-403d-8bb6-882d11120822" containerName="image-pruner"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.464366 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.471489 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"]
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.474269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.474304 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.474432 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8zd8\" (UniqueName: \"kubernetes.io/projected/48bda508-98fc-4c83-bbf1-98ad97774a97-kube-api-access-v8zd8\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.474466 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48bda508-98fc-4c83-bbf1-98ad97774a97-tmp\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.493117 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954 is running failed: container process not found" containerID="d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.497140 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954 is running failed: container process not found" containerID="d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.497378 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56 is running failed: container process not found" containerID="fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.497447 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954 is running failed: container process not found" containerID="d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.498289 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7g27t" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="registry-server" probeResult="unknown"
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.499195 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56 is running failed: container process not found" containerID="fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.499396 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56 is running failed: container process not found" containerID="fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:14:50 crc kubenswrapper[5108]: E0219 00:14:50.499425 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bf6wt" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="registry-server" probeResult="unknown"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.580309 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v8zd8\" (UniqueName: \"kubernetes.io/projected/48bda508-98fc-4c83-bbf1-98ad97774a97-kube-api-access-v8zd8\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.580439 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48bda508-98fc-4c83-bbf1-98ad97774a97-tmp\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.580513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.580540 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.581021 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48bda508-98fc-4c83-bbf1-98ad97774a97-tmp\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.582681 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.594091 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48bda508-98fc-4c83-bbf1-98ad97774a97-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.597495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8zd8\" (UniqueName: \"kubernetes.io/projected/48bda508-98fc-4c83-bbf1-98ad97774a97-kube-api-access-v8zd8\") pod \"marketplace-operator-547dbd544d-7bgw9\" (UID: \"48bda508-98fc-4c83-bbf1-98ad97774a97\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.786838 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.872624 5108 generic.go:358] "Generic (PLEG): container finished" podID="1a52a4e5-9502-4222-8090-3c18943abd74" containerID="9f7294aa24b5b6a57fcfe7a4cba4d508dfc953f676ec6b571542117ef5f6d5f5" exitCode=0
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.873060 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" event={"ID":"1a52a4e5-9502-4222-8090-3c18943abd74","Type":"ContainerDied","Data":"9f7294aa24b5b6a57fcfe7a4cba4d508dfc953f676ec6b571542117ef5f6d5f5"}
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.880539 5108 generic.go:358] "Generic (PLEG): container finished" podID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerID="d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" exitCode=0
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.880593 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerDied","Data":"d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954"}
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.904318 5108 generic.go:358] "Generic (PLEG): container finished" podID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerID="fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" exitCode=0
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.904463 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerDied","Data":"fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56"}
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.932863 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bf6wt"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.933053 5108 generic.go:358] "Generic (PLEG): container finished" podID="7024eadd-8a38-49f7-996f-bb49882d226e" containerID="5d07707604dc8cad65aed0301c1862990e0f5ee0f21acc3118e332386938b333" exitCode=0
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.933182 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerDied","Data":"5d07707604dc8cad65aed0301c1862990e0f5ee0f21acc3118e332386938b333"}
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.947345 5108 generic.go:358] "Generic (PLEG): container finished" podID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerID="a8163bc7543e908e819e02d90dae254a8028c133bb32588bec9906b432ffddb1" exitCode=0
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.947454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerDied","Data":"a8163bc7543e908e819e02d90dae254a8028c133bb32588bec9906b432ffddb1"}
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.979857 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g27t"
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986492 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content\") pod \"0aefb89a-2ddc-4334-9bab-28390ba5a389\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986607 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities\") pod \"0aefb89a-2ddc-4334-9bab-28390ba5a389\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986643 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmtt\" (UniqueName: \"kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt\") pod \"391cbbed-1038-47a8-aad5-bbe7e5cea901\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986667 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities\") pod \"391cbbed-1038-47a8-aad5-bbe7e5cea901\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986716 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content\") pod \"391cbbed-1038-47a8-aad5-bbe7e5cea901\" (UID: \"391cbbed-1038-47a8-aad5-bbe7e5cea901\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.986761 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb5mx\" (UniqueName: \"kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx\") pod \"0aefb89a-2ddc-4334-9bab-28390ba5a389\" (UID: \"0aefb89a-2ddc-4334-9bab-28390ba5a389\") "
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.996676 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities" (OuterVolumeSpecName: "utilities") pod "0aefb89a-2ddc-4334-9bab-28390ba5a389" (UID: "0aefb89a-2ddc-4334-9bab-28390ba5a389"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:50 crc kubenswrapper[5108]: I0219 00:14:50.996786 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities" (OuterVolumeSpecName: "utilities") pod "391cbbed-1038-47a8-aad5-bbe7e5cea901" (UID: "391cbbed-1038-47a8-aad5-bbe7e5cea901"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.008472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt" (OuterVolumeSpecName: "kube-api-access-9tmtt") pod "391cbbed-1038-47a8-aad5-bbe7e5cea901" (UID: "391cbbed-1038-47a8-aad5-bbe7e5cea901"). InnerVolumeSpecName "kube-api-access-9tmtt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.010850 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx" (OuterVolumeSpecName: "kube-api-access-tb5mx") pod "0aefb89a-2ddc-4334-9bab-28390ba5a389" (UID: "0aefb89a-2ddc-4334-9bab-28390ba5a389"). InnerVolumeSpecName "kube-api-access-tb5mx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.025544 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df8pn"
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.029343 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b"
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.048472 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aefb89a-2ddc-4334-9bab-28390ba5a389" (UID: "0aefb89a-2ddc-4334-9bab-28390ba5a389"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.057809 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgh2p"
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.083379 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "391cbbed-1038-47a8-aad5-bbe7e5cea901" (UID: "391cbbed-1038-47a8-aad5-bbe7e5cea901"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087249 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities\") pod \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087499 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content\") pod \"7024eadd-8a38-49f7-996f-bb49882d226e\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087612 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca\") pod \"1a52a4e5-9502-4222-8090-3c18943abd74\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087783 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content\") pod \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087876 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc4lq\" (UniqueName: \"kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq\") pod \"7024eadd-8a38-49f7-996f-bb49882d226e\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.087986 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics\") pod \"1a52a4e5-9502-4222-8090-3c18943abd74\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088090 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7bxd\" (UniqueName: \"kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd\") pod \"1a52a4e5-9502-4222-8090-3c18943abd74\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088183 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp\") pod \"1a52a4e5-9502-4222-8090-3c18943abd74\" (UID: \"1a52a4e5-9502-4222-8090-3c18943abd74\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088256 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities" (OuterVolumeSpecName: "utilities") pod "664a83e1-cb9d-4e9d-85c7-88a01dc6d040" (UID: "664a83e1-cb9d-4e9d-85c7-88a01dc6d040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088362 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities\") pod \"7024eadd-8a38-49f7-996f-bb49882d226e\" (UID: \"7024eadd-8a38-49f7-996f-bb49882d226e\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqnbz\" (UniqueName: \"kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz\") pod \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\" (UID: \"664a83e1-cb9d-4e9d-85c7-88a01dc6d040\") "
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088862 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.088986 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391cbbed-1038-47a8-aad5-bbe7e5cea901-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.089063 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tb5mx\" (UniqueName: \"kubernetes.io/projected/0aefb89a-2ddc-4334-9bab-28390ba5a389-kube-api-access-tb5mx\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.089126 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.089206 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.089332 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aefb89a-2ddc-4334-9bab-28390ba5a389-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.089412 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tmtt\" (UniqueName: \"kubernetes.io/projected/391cbbed-1038-47a8-aad5-bbe7e5cea901-kube-api-access-9tmtt\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.090510 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp" (OuterVolumeSpecName: "tmp") pod "1a52a4e5-9502-4222-8090-3c18943abd74" (UID: "1a52a4e5-9502-4222-8090-3c18943abd74"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.091377 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1a52a4e5-9502-4222-8090-3c18943abd74" (UID: "1a52a4e5-9502-4222-8090-3c18943abd74"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.091618 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities" (OuterVolumeSpecName: "utilities") pod "7024eadd-8a38-49f7-996f-bb49882d226e" (UID: "7024eadd-8a38-49f7-996f-bb49882d226e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.091714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq" (OuterVolumeSpecName: "kube-api-access-kc4lq") pod "7024eadd-8a38-49f7-996f-bb49882d226e" (UID: "7024eadd-8a38-49f7-996f-bb49882d226e"). InnerVolumeSpecName "kube-api-access-kc4lq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.092174 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd" (OuterVolumeSpecName: "kube-api-access-r7bxd") pod "1a52a4e5-9502-4222-8090-3c18943abd74" (UID: "1a52a4e5-9502-4222-8090-3c18943abd74"). InnerVolumeSpecName "kube-api-access-r7bxd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.096012 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1a52a4e5-9502-4222-8090-3c18943abd74" (UID: "1a52a4e5-9502-4222-8090-3c18943abd74"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.096706 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz" (OuterVolumeSpecName: "kube-api-access-mqnbz") pod "664a83e1-cb9d-4e9d-85c7-88a01dc6d040" (UID: "664a83e1-cb9d-4e9d-85c7-88a01dc6d040"). InnerVolumeSpecName "kube-api-access-mqnbz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.105752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7024eadd-8a38-49f7-996f-bb49882d226e" (UID: "7024eadd-8a38-49f7-996f-bb49882d226e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190742 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190779 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190790 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kc4lq\" (UniqueName: \"kubernetes.io/projected/7024eadd-8a38-49f7-996f-bb49882d226e-kube-api-access-kc4lq\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190799 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a52a4e5-9502-4222-8090-3c18943abd74-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190808 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7bxd\" (UniqueName: \"kubernetes.io/projected/1a52a4e5-9502-4222-8090-3c18943abd74-kube-api-access-r7bxd\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190816 5108
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1a52a4e5-9502-4222-8090-3c18943abd74-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190824 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7024eadd-8a38-49f7-996f-bb49882d226e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.190832 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mqnbz\" (UniqueName: \"kubernetes.io/projected/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-kube-api-access-mqnbz\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.192724 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "664a83e1-cb9d-4e9d-85c7-88a01dc6d040" (UID: "664a83e1-cb9d-4e9d-85c7-88a01dc6d040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.280243 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-7bgw9"] Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.291597 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/664a83e1-cb9d-4e9d-85c7-88a01dc6d040-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.957738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df8pn" event={"ID":"7024eadd-8a38-49f7-996f-bb49882d226e","Type":"ContainerDied","Data":"c65b14e7a7b4ab37d036f683a1228f2175fc6db62377b79910d621c57dec15ac"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.958100 5108 scope.go:117] "RemoveContainer" containerID="5d07707604dc8cad65aed0301c1862990e0f5ee0f21acc3118e332386938b333" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.957856 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df8pn" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.963169 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pgh2p" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.963161 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgh2p" event={"ID":"664a83e1-cb9d-4e9d-85c7-88a01dc6d040","Type":"ContainerDied","Data":"ca760ea02ff63c01eefee534db66301d7e5518ea841bb0779a1b3fabe141c884"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.965867 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" event={"ID":"1a52a4e5-9502-4222-8090-3c18943abd74","Type":"ContainerDied","Data":"fa13d7402dddbb0419dcb7fe4aae6ffe81ef24a23a6c7293a64c2dc31bdacef8"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.965965 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k745b" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.969098 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7g27t" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.969105 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g27t" event={"ID":"0aefb89a-2ddc-4334-9bab-28390ba5a389","Type":"ContainerDied","Data":"4dbb53113a3aeff4d91c0da8f759613feeb2c5f6d6019293a98d6e9b6a7ad7cc"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.973138 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9" event={"ID":"48bda508-98fc-4c83-bbf1-98ad97774a97","Type":"ContainerStarted","Data":"75ea6ddd76cf1d48a022d263f14d6105280849716884c8566afd079a9823168d"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.973176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9" event={"ID":"48bda508-98fc-4c83-bbf1-98ad97774a97","Type":"ContainerStarted","Data":"a519c543635be774bf606b9c9307643f92929186bc7cb0d9ff4a3853366ddf55"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.973383 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.976509 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.977439 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bf6wt" event={"ID":"391cbbed-1038-47a8-aad5-bbe7e5cea901","Type":"ContainerDied","Data":"d58c95913d1e7cf3e2be8d1d7d20249958a23ee7c1006df6cfcdafb47ea29556"} Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.977517 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bf6wt" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.996140 5108 scope.go:117] "RemoveContainer" containerID="499bf7708614511bbbd3e2e6cfe47e5c3eca104ffaa13331c006d8b012183b5d" Feb 19 00:14:51 crc kubenswrapper[5108]: I0219 00:14:51.999544 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.004835 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-df8pn"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.022502 5108 scope.go:117] "RemoveContainer" containerID="7ec3a2858f2024d311ea006441198cc284e7da46e39e56cfdf63f269f0354c58" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.023718 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-7bgw9" podStartSLOduration=2.02368898 podStartE2EDuration="2.02368898s" podCreationTimestamp="2026-02-19 00:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:52.018557347 +0000 UTC m=+350.985203715" watchObservedRunningTime="2026-02-19 00:14:52.02368898 +0000 UTC m=+350.990335348" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.081218 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.098424 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k745b"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.105829 5108 scope.go:117] "RemoveContainer" containerID="a8163bc7543e908e819e02d90dae254a8028c133bb32588bec9906b432ffddb1" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.128439 5108 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.135853 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pgh2p"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.136270 5108 scope.go:117] "RemoveContainer" containerID="e5a9555c5f9a3ee1fe3244db5fc8de41a71f45676afd98ddf86bbbc588828177" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.143385 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g27t"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.147137 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7g27t"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.150398 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bf6wt"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.153866 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bf6wt"] Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.157787 5108 scope.go:117] "RemoveContainer" containerID="4b9f9e159f7a35077e60ab929362eeb9552f4fed3ec23346fd69ccf88a3dbd74" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.170742 5108 scope.go:117] "RemoveContainer" containerID="9f7294aa24b5b6a57fcfe7a4cba4d508dfc953f676ec6b571542117ef5f6d5f5" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.186380 5108 scope.go:117] "RemoveContainer" containerID="d0e2cb6084847b46d20c4d5f758a29bfc948475e706366777148b6c480d6f954" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.199296 5108 scope.go:117] "RemoveContainer" containerID="8f3e6d149cb943ee95cf07fd25c2cc98a080ff08897bab8cb65a6e9e8b149d00" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.212740 5108 scope.go:117] "RemoveContainer" 
containerID="77caeaa12cc4a5e04b77cf882496c3a68eeda12defa19b774aba151e4866c27b" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.227871 5108 scope.go:117] "RemoveContainer" containerID="fa0043e4a3b6405f80d187d4b031acf512426f83b1bb47388e7dc47bff72fe56" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.241638 5108 scope.go:117] "RemoveContainer" containerID="99e67a8bfbe8b0f39958777bc45cba02b813b5d9b1e692fa23dfc5de03ed2819" Feb 19 00:14:52 crc kubenswrapper[5108]: I0219 00:14:52.254256 5108 scope.go:117] "RemoveContainer" containerID="495a3d48fea778b33a19b80bedce40554d6f2ef21ee5c0e37358a319fd3f60d9" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.022161 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49llx"] Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023716 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023750 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023798 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023815 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023835 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.023846 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" 
containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024062 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024120 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024141 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024151 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024167 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024176 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024227 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024238 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024252 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024261 5108 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="extract-utilities" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024308 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024322 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024335 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024345 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="extract-content" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024398 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024411 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024426 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024435 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024480 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="registry-server" Feb 19 00:14:53 
crc kubenswrapper[5108]: I0219 00:14:53.024512 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024644 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024664 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024677 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" containerName="marketplace-operator" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024691 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.024707 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" containerName="registry-server" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.054440 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49llx"] Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.054570 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.057224 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.113755 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-catalog-content\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.113838 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-utilities\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.113865 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg72t\" (UniqueName: \"kubernetes.io/projected/caca46e8-3d11-46fa-9cdf-92e60dfca341-kube-api-access-tg72t\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.215398 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-catalog-content\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.215759 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-utilities\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.215854 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tg72t\" (UniqueName: \"kubernetes.io/projected/caca46e8-3d11-46fa-9cdf-92e60dfca341-kube-api-access-tg72t\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.215899 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-catalog-content\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.216145 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caca46e8-3d11-46fa-9cdf-92e60dfca341-utilities\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.236221 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg72t\" (UniqueName: \"kubernetes.io/projected/caca46e8-3d11-46fa-9cdf-92e60dfca341-kube-api-access-tg72t\") pod \"redhat-operators-49llx\" (UID: \"caca46e8-3d11-46fa-9cdf-92e60dfca341\") " pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.373820 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.767239 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49llx"] Feb 19 00:14:53 crc kubenswrapper[5108]: W0219 00:14:53.770639 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaca46e8_3d11_46fa_9cdf_92e60dfca341.slice/crio-007570139d63e70c62c45d17634f67be27c1aca0b6e674b7a0d916015b61522e WatchSource:0}: Error finding container 007570139d63e70c62c45d17634f67be27c1aca0b6e674b7a0d916015b61522e: Status 404 returned error can't find the container with id 007570139d63e70c62c45d17634f67be27c1aca0b6e674b7a0d916015b61522e Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.855009 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aefb89a-2ddc-4334-9bab-28390ba5a389" path="/var/lib/kubelet/pods/0aefb89a-2ddc-4334-9bab-28390ba5a389/volumes" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.856205 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a52a4e5-9502-4222-8090-3c18943abd74" path="/var/lib/kubelet/pods/1a52a4e5-9502-4222-8090-3c18943abd74/volumes" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.856959 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391cbbed-1038-47a8-aad5-bbe7e5cea901" path="/var/lib/kubelet/pods/391cbbed-1038-47a8-aad5-bbe7e5cea901/volumes" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.858387 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664a83e1-cb9d-4e9d-85c7-88a01dc6d040" path="/var/lib/kubelet/pods/664a83e1-cb9d-4e9d-85c7-88a01dc6d040/volumes" Feb 19 00:14:53 crc kubenswrapper[5108]: I0219 00:14:53.859422 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7024eadd-8a38-49f7-996f-bb49882d226e" 
path="/var/lib/kubelet/pods/7024eadd-8a38-49f7-996f-bb49882d226e/volumes" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.007952 5108 generic.go:358] "Generic (PLEG): container finished" podID="caca46e8-3d11-46fa-9cdf-92e60dfca341" containerID="63874bfeeaebaa275820000ee808426ba9066d0b07ad5bd85fac1f737d1b43b1" exitCode=0 Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.008052 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49llx" event={"ID":"caca46e8-3d11-46fa-9cdf-92e60dfca341","Type":"ContainerDied","Data":"63874bfeeaebaa275820000ee808426ba9066d0b07ad5bd85fac1f737d1b43b1"} Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.008555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49llx" event={"ID":"caca46e8-3d11-46fa-9cdf-92e60dfca341","Type":"ContainerStarted","Data":"007570139d63e70c62c45d17634f67be27c1aca0b6e674b7a0d916015b61522e"} Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.416506 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9lv87"] Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.423888 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.426396 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.435198 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lv87"] Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.533977 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-utilities\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.534132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47b7\" (UniqueName: \"kubernetes.io/projected/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-kube-api-access-l47b7\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.534174 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-catalog-content\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.635007 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-utilities\") pod \"certified-operators-9lv87\" (UID: 
\"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.635426 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l47b7\" (UniqueName: \"kubernetes.io/projected/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-kube-api-access-l47b7\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.635562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-catalog-content\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.635562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-utilities\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.635846 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-catalog-content\") pod \"certified-operators-9lv87\" (UID: \"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.657117 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47b7\" (UniqueName: \"kubernetes.io/projected/2cabb708-2cc7-4505-9dae-0d78ce2ed6b0-kube-api-access-l47b7\") pod \"certified-operators-9lv87\" (UID: 
\"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0\") " pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:54 crc kubenswrapper[5108]: I0219 00:14:54.745179 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.014544 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49llx" event={"ID":"caca46e8-3d11-46fa-9cdf-92e60dfca341","Type":"ContainerStarted","Data":"792cd89325d28cd0cb6b439a96f53b3688c502f01e759e866901f4dc0b4d5761"} Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.163416 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lv87"] Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.417795 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h6f6p"] Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.426104 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlkgp"] Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.429968 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.439370 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h6f6p"] Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.439535 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.439960 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlkgp"] Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.455065 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.548740 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.548796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-trusted-ca\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.548820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-tls\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.548849 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb44j\" (UniqueName: 
\"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-kube-api-access-zb44j\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.548994 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-catalog-content\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a06166f6-76f0-40d8-b336-fa750785f4cf-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549081 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-utilities\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549107 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kfh7\" (UniqueName: \"kubernetes.io/projected/ec9e03f3-e9a6-482d-a19b-87b2a240761e-kube-api-access-5kfh7\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549190 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-certificates\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a06166f6-76f0-40d8-b336-fa750785f4cf-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.549251 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.574744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650241 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-certificates\") pod 
\"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a06166f6-76f0-40d8-b336-fa750785f4cf-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650333 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650387 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-trusted-ca\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650406 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-tls\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb44j\" (UniqueName: 
\"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-kube-api-access-zb44j\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650460 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-catalog-content\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a06166f6-76f0-40d8-b336-fa750785f4cf-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-utilities\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.650553 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kfh7\" (UniqueName: \"kubernetes.io/projected/ec9e03f3-e9a6-482d-a19b-87b2a240761e-kube-api-access-5kfh7\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.651214 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-catalog-content\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.651710 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-trusted-ca\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.651830 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a06166f6-76f0-40d8-b336-fa750785f4cf-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.651865 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9e03f3-e9a6-482d-a19b-87b2a240761e-utilities\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.651896 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-certificates\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.657599 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a06166f6-76f0-40d8-b336-fa750785f4cf-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.659039 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-registry-tls\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.667865 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.668081 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kfh7\" (UniqueName: \"kubernetes.io/projected/ec9e03f3-e9a6-482d-a19b-87b2a240761e-kube-api-access-5kfh7\") pod \"community-operators-wlkgp\" (UID: \"ec9e03f3-e9a6-482d-a19b-87b2a240761e\") " pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.671403 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb44j\" (UniqueName: \"kubernetes.io/projected/a06166f6-76f0-40d8-b336-fa750785f4cf-kube-api-access-zb44j\") pod \"image-registry-5d9d95bf5b-h6f6p\" (UID: \"a06166f6-76f0-40d8-b336-fa750785f4cf\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.785607 5108 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:55 crc kubenswrapper[5108]: I0219 00:14:55.788535 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.021758 5108 generic.go:358] "Generic (PLEG): container finished" podID="caca46e8-3d11-46fa-9cdf-92e60dfca341" containerID="792cd89325d28cd0cb6b439a96f53b3688c502f01e759e866901f4dc0b4d5761" exitCode=0 Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.021831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49llx" event={"ID":"caca46e8-3d11-46fa-9cdf-92e60dfca341","Type":"ContainerDied","Data":"792cd89325d28cd0cb6b439a96f53b3688c502f01e759e866901f4dc0b4d5761"} Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.024069 5108 generic.go:358] "Generic (PLEG): container finished" podID="2cabb708-2cc7-4505-9dae-0d78ce2ed6b0" containerID="b3c883ef9e3908f5c4f1057930eebd5aaf99906636378ba8f2a92eaee8ba24d0" exitCode=0 Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.024154 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lv87" event={"ID":"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0","Type":"ContainerDied","Data":"b3c883ef9e3908f5c4f1057930eebd5aaf99906636378ba8f2a92eaee8ba24d0"} Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.024186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lv87" event={"ID":"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0","Type":"ContainerStarted","Data":"6a717401d70acba2882dbebd3ba94378332c467272411e49999d2d04cc278e79"} Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.190121 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlkgp"] Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 
00:14:56.242694 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h6f6p"] Feb 19 00:14:56 crc kubenswrapper[5108]: W0219 00:14:56.255602 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda06166f6_76f0_40d8_b336_fa750785f4cf.slice/crio-8afc5aa14fa7a5d08ba8de02e4ca25b2a173be2f9c7fa176bc2a26a7e4dcfd58 WatchSource:0}: Error finding container 8afc5aa14fa7a5d08ba8de02e4ca25b2a173be2f9c7fa176bc2a26a7e4dcfd58: Status 404 returned error can't find the container with id 8afc5aa14fa7a5d08ba8de02e4ca25b2a173be2f9c7fa176bc2a26a7e4dcfd58 Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.817623 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.826510 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.827588 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.829339 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.866492 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbzv\" (UniqueName: \"kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.866664 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:56 crc kubenswrapper[5108]: I0219 00:14:56.867013 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.409359 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.409442 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbzv\" (UniqueName: \"kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.409497 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.410412 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.410478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.427122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" event={"ID":"a06166f6-76f0-40d8-b336-fa750785f4cf","Type":"ContainerStarted","Data":"022058f0669806eb4aa2b7238aec7d7ee906adfd76120f9006f154b3e4cda842"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.427167 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" event={"ID":"a06166f6-76f0-40d8-b336-fa750785f4cf","Type":"ContainerStarted","Data":"8afc5aa14fa7a5d08ba8de02e4ca25b2a173be2f9c7fa176bc2a26a7e4dcfd58"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.428030 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.437829 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49llx" event={"ID":"caca46e8-3d11-46fa-9cdf-92e60dfca341","Type":"ContainerStarted","Data":"326d620983a932c600ba8b1072cae55e155ee326d89b7cf2a164f9b63c393de5"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.440877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbzv\" (UniqueName: 
\"kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv\") pod \"redhat-marketplace-nmwcg\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.443586 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lv87" event={"ID":"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0","Type":"ContainerStarted","Data":"8a2db39454746e471a035272d262e9ea926bdbe353884df118a8b229c0aa9378"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.445319 5108 generic.go:358] "Generic (PLEG): container finished" podID="ec9e03f3-e9a6-482d-a19b-87b2a240761e" containerID="f11357739646b2227af7f282257de4e00626ba695f1c263429781c125281491d" exitCode=0 Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.445383 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkgp" event={"ID":"ec9e03f3-e9a6-482d-a19b-87b2a240761e","Type":"ContainerDied","Data":"f11357739646b2227af7f282257de4e00626ba695f1c263429781c125281491d"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.445402 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkgp" event={"ID":"ec9e03f3-e9a6-482d-a19b-87b2a240761e","Type":"ContainerStarted","Data":"9edc8f75ea29cc8edd74509e218860fa89b333730301205ae8ca2f7ebf9bbb0f"} Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.453788 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.456947 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" podStartSLOduration=2.456909214 podStartE2EDuration="2.456909214s" podCreationTimestamp="2026-02-19 00:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:57.450433804 +0000 UTC m=+356.417080122" watchObservedRunningTime="2026-02-19 00:14:57.456909214 +0000 UTC m=+356.423555542" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.509623 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49llx" podStartSLOduration=3.790757582 podStartE2EDuration="4.509602535s" podCreationTimestamp="2026-02-19 00:14:53 +0000 UTC" firstStartedPulling="2026-02-19 00:14:54.010638727 +0000 UTC m=+352.977285035" lastFinishedPulling="2026-02-19 00:14:54.72948368 +0000 UTC m=+353.696129988" observedRunningTime="2026-02-19 00:14:57.5022379 +0000 UTC m=+356.468884208" watchObservedRunningTime="2026-02-19 00:14:57.509602535 +0000 UTC m=+356.476248843" Feb 19 00:14:57 crc kubenswrapper[5108]: I0219 00:14:57.890023 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:14:58 crc kubenswrapper[5108]: I0219 00:14:58.454245 5108 generic.go:358] "Generic (PLEG): container finished" podID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerID="a1af2bcc7efde802b233bb98b147c48c40aac06ce0c7850550d663894928de3e" exitCode=0 Feb 19 00:14:58 crc kubenswrapper[5108]: I0219 00:14:58.454321 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" 
event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerDied","Data":"a1af2bcc7efde802b233bb98b147c48c40aac06ce0c7850550d663894928de3e"} Feb 19 00:14:58 crc kubenswrapper[5108]: I0219 00:14:58.454739 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerStarted","Data":"de67fdf8578c6b6e24b534441bbc03991c37cd3b9c968fc12178adfdb9eea13c"} Feb 19 00:14:58 crc kubenswrapper[5108]: I0219 00:14:58.459786 5108 generic.go:358] "Generic (PLEG): container finished" podID="2cabb708-2cc7-4505-9dae-0d78ce2ed6b0" containerID="8a2db39454746e471a035272d262e9ea926bdbe353884df118a8b229c0aa9378" exitCode=0 Feb 19 00:14:58 crc kubenswrapper[5108]: I0219 00:14:58.460523 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lv87" event={"ID":"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0","Type":"ContainerDied","Data":"8a2db39454746e471a035272d262e9ea926bdbe353884df118a8b229c0aa9378"} Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.465879 5108 generic.go:358] "Generic (PLEG): container finished" podID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerID="23d86ad71724cd131f04da84c62354d388d09f8e5e766f9967965a67327cbaa7" exitCode=0 Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.465920 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerDied","Data":"23d86ad71724cd131f04da84c62354d388d09f8e5e766f9967965a67327cbaa7"} Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.468177 5108 generic.go:358] "Generic (PLEG): container finished" podID="ec9e03f3-e9a6-482d-a19b-87b2a240761e" containerID="651b2fad9edad0679069eb148ea7994f48e6ba49d0d54569aadd20c4e54d992c" exitCode=0 Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.468267 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wlkgp" event={"ID":"ec9e03f3-e9a6-482d-a19b-87b2a240761e","Type":"ContainerDied","Data":"651b2fad9edad0679069eb148ea7994f48e6ba49d0d54569aadd20c4e54d992c"} Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.471735 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lv87" event={"ID":"2cabb708-2cc7-4505-9dae-0d78ce2ed6b0","Type":"ContainerStarted","Data":"58c22aceb379b408d34db384d5c4a550e46a9ff537c9775e5ed5b70435bf8bad"} Feb 19 00:14:59 crc kubenswrapper[5108]: I0219 00:14:59.528426 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9lv87" podStartSLOduration=4.799821107 podStartE2EDuration="5.528405792s" podCreationTimestamp="2026-02-19 00:14:54 +0000 UTC" firstStartedPulling="2026-02-19 00:14:56.02502008 +0000 UTC m=+354.991666388" lastFinishedPulling="2026-02-19 00:14:56.753604765 +0000 UTC m=+355.720251073" observedRunningTime="2026-02-19 00:14:59.526854168 +0000 UTC m=+358.493500486" watchObservedRunningTime="2026-02-19 00:14:59.528405792 +0000 UTC m=+358.495052100" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.132028 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv"] Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.146544 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv"] Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.146708 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.149604 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.149620 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.248839 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.249037 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8nz\" (UniqueName: \"kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.249105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.350479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.350555 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8nz\" (UniqueName: \"kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.350701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.351620 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.357050 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc 
kubenswrapper[5108]: I0219 00:15:00.366667 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8nz\" (UniqueName: \"kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz\") pod \"collect-profiles-29524335-xqsgv\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.461723 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.480673 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerStarted","Data":"f52498423010f44e21439c5b27a29081a680d4feea685d9d9c0d90e0d52d2dfb"} Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.485956 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkgp" event={"ID":"ec9e03f3-e9a6-482d-a19b-87b2a240761e","Type":"ContainerStarted","Data":"c62c8e63e61dbd13a36238fb71e9a4354ad46f8a322e0a3c6ce52e2b185f0d61"} Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.503166 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmwcg" podStartSLOduration=3.877268378 podStartE2EDuration="4.503148637s" podCreationTimestamp="2026-02-19 00:14:56 +0000 UTC" firstStartedPulling="2026-02-19 00:14:58.45563318 +0000 UTC m=+357.422279488" lastFinishedPulling="2026-02-19 00:14:59.081513419 +0000 UTC m=+358.048159747" observedRunningTime="2026-02-19 00:15:00.501041369 +0000 UTC m=+359.467687687" watchObservedRunningTime="2026-02-19 00:15:00.503148637 +0000 UTC m=+359.469794945" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.523568 5108 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlkgp" podStartSLOduration=4.65015229 podStartE2EDuration="5.523551377s" podCreationTimestamp="2026-02-19 00:14:55 +0000 UTC" firstStartedPulling="2026-02-19 00:14:57.446071852 +0000 UTC m=+356.412718160" lastFinishedPulling="2026-02-19 00:14:58.319470939 +0000 UTC m=+357.286117247" observedRunningTime="2026-02-19 00:15:00.521484649 +0000 UTC m=+359.488130977" watchObservedRunningTime="2026-02-19 00:15:00.523551377 +0000 UTC m=+359.490197695" Feb 19 00:15:00 crc kubenswrapper[5108]: I0219 00:15:00.696884 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv"] Feb 19 00:15:00 crc kubenswrapper[5108]: W0219 00:15:00.708191 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73bbdaed_9d1b_4874_8bd9_1fe144126080.slice/crio-2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df WatchSource:0}: Error finding container 2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df: Status 404 returned error can't find the container with id 2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df Feb 19 00:15:01 crc kubenswrapper[5108]: I0219 00:15:01.491996 5108 generic.go:358] "Generic (PLEG): container finished" podID="73bbdaed-9d1b-4874-8bd9-1fe144126080" containerID="1a775fc35c64e08a4ff2ee4a9ce3baed14fb18d7b9241f74e6cecffdbf5b63d4" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[5108]: I0219 00:15:01.492073 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" event={"ID":"73bbdaed-9d1b-4874-8bd9-1fe144126080","Type":"ContainerDied","Data":"1a775fc35c64e08a4ff2ee4a9ce3baed14fb18d7b9241f74e6cecffdbf5b63d4"} Feb 19 00:15:01 crc kubenswrapper[5108]: I0219 00:15:01.492374 5108 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" event={"ID":"73bbdaed-9d1b-4874-8bd9-1fe144126080","Type":"ContainerStarted","Data":"2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df"} Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.751303 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.883829 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume\") pod \"73bbdaed-9d1b-4874-8bd9-1fe144126080\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.883922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume\") pod \"73bbdaed-9d1b-4874-8bd9-1fe144126080\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.884007 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb8nz\" (UniqueName: \"kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz\") pod \"73bbdaed-9d1b-4874-8bd9-1fe144126080\" (UID: \"73bbdaed-9d1b-4874-8bd9-1fe144126080\") " Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.885546 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume" (OuterVolumeSpecName: "config-volume") pod "73bbdaed-9d1b-4874-8bd9-1fe144126080" (UID: "73bbdaed-9d1b-4874-8bd9-1fe144126080"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.891263 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "73bbdaed-9d1b-4874-8bd9-1fe144126080" (UID: "73bbdaed-9d1b-4874-8bd9-1fe144126080"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.891735 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz" (OuterVolumeSpecName: "kube-api-access-wb8nz") pod "73bbdaed-9d1b-4874-8bd9-1fe144126080" (UID: "73bbdaed-9d1b-4874-8bd9-1fe144126080"). InnerVolumeSpecName "kube-api-access-wb8nz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.986529 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bbdaed-9d1b-4874-8bd9-1fe144126080-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.986556 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73bbdaed-9d1b-4874-8bd9-1fe144126080-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:02 crc kubenswrapper[5108]: I0219 00:15:02.986565 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb8nz\" (UniqueName: \"kubernetes.io/projected/73bbdaed-9d1b-4874-8bd9-1fe144126080-kube-api-access-wb8nz\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.374784 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:15:03 crc kubenswrapper[5108]: 
I0219 00:15:03.374829 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.416242 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.504422 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.504484 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-xqsgv" event={"ID":"73bbdaed-9d1b-4874-8bd9-1fe144126080","Type":"ContainerDied","Data":"2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df"} Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.504540 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8a0e401b3cea84f86b238f62ec9d2b1ab10cab67a2db31d03dd76f9c0842df" Feb 19 00:15:03 crc kubenswrapper[5108]: I0219 00:15:03.554481 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49llx" Feb 19 00:15:04 crc kubenswrapper[5108]: I0219 00:15:04.746225 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:15:04 crc kubenswrapper[5108]: I0219 00:15:04.746512 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:15:04 crc kubenswrapper[5108]: I0219 00:15:04.783905 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:15:05 crc kubenswrapper[5108]: I0219 00:15:05.568430 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9lv87" Feb 19 00:15:05 crc kubenswrapper[5108]: I0219 00:15:05.788851 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:15:05 crc kubenswrapper[5108]: I0219 00:15:05.789271 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:15:05 crc kubenswrapper[5108]: I0219 00:15:05.837710 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:15:06 crc kubenswrapper[5108]: I0219 00:15:06.582558 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlkgp" Feb 19 00:15:07 crc kubenswrapper[5108]: I0219 00:15:07.454649 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:15:07 crc kubenswrapper[5108]: I0219 00:15:07.454742 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:15:07 crc kubenswrapper[5108]: I0219 00:15:07.492659 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:15:07 crc kubenswrapper[5108]: I0219 00:15:07.572116 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:15:19 crc kubenswrapper[5108]: I0219 00:15:19.482713 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-h6f6p" Feb 19 00:15:19 crc kubenswrapper[5108]: I0219 00:15:19.544223 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"] Feb 19 00:15:44 crc kubenswrapper[5108]: I0219 00:15:44.597063 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" podUID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" containerName="registry" containerID="cri-o://c46259d94cf35026a5da629070d90e9c663bfd17983df53225b46d36f1c25c1b" gracePeriod=30 Feb 19 00:15:44 crc kubenswrapper[5108]: I0219 00:15:44.762904 5108 generic.go:358] "Generic (PLEG): container finished" podID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" containerID="c46259d94cf35026a5da629070d90e9c663bfd17983df53225b46d36f1c25c1b" exitCode=0 Feb 19 00:15:44 crc kubenswrapper[5108]: I0219 00:15:44.763017 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" event={"ID":"223e4146-2005-4ad4-8fff-1d248c0f8a4d","Type":"ContainerDied","Data":"c46259d94cf35026a5da629070d90e9c663bfd17983df53225b46d36f1c25c1b"} Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.010744 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049479 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049587 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049623 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049665 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tlr2\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.049797 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.050027 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.050084 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates\") pod \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\" (UID: \"223e4146-2005-4ad4-8fff-1d248c0f8a4d\") " Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.050783 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.051120 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.058660 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.058879 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.059002 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.060135 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2" (OuterVolumeSpecName: "kube-api-access-8tlr2") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "kube-api-access-8tlr2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.069594 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.074894 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "223e4146-2005-4ad4-8fff-1d248c0f8a4d" (UID: "223e4146-2005-4ad4-8fff-1d248c0f8a4d"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151166 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/223e4146-2005-4ad4-8fff-1d248c0f8a4d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151204 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151213 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8tlr2\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-kube-api-access-8tlr2\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151222 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151231 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151239 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/223e4146-2005-4ad4-8fff-1d248c0f8a4d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.151247 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/223e4146-2005-4ad4-8fff-1d248c0f8a4d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.773349 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.773354 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qv7jb" event={"ID":"223e4146-2005-4ad4-8fff-1d248c0f8a4d","Type":"ContainerDied","Data":"eb6fd57c01bbab0c1ec617f4f32dec2d4f69621d02a1906b5aadfad5ec784999"} Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.773739 5108 scope.go:117] "RemoveContainer" containerID="c46259d94cf35026a5da629070d90e9c663bfd17983df53225b46d36f1c25c1b" Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.836515 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"] Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.843171 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qv7jb"] Feb 19 00:15:45 crc kubenswrapper[5108]: I0219 00:15:45.863160 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" path="/var/lib/kubelet/pods/223e4146-2005-4ad4-8fff-1d248c0f8a4d/volumes" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.143741 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524336-74wgt"] Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146188 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146203 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146242 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73bbdaed-9d1b-4874-8bd9-1fe144126080" containerName="collect-profiles" 
Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146249 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="73bbdaed-9d1b-4874-8bd9-1fe144126080" containerName="collect-profiles" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146365 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="223e4146-2005-4ad4-8fff-1d248c0f8a4d" containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.146377 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="73bbdaed-9d1b-4874-8bd9-1fe144126080" containerName="collect-profiles" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.155852 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-74wgt"] Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.155913 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.157975 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.160237 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.160419 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.260099 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl7br\" (UniqueName: \"kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br\") pod \"auto-csr-approver-29524336-74wgt\" (UID: \"e3f1665b-5fcf-4742-bb14-9479d30e37bc\") " pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:00 crc 
kubenswrapper[5108]: I0219 00:16:00.361660 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kl7br\" (UniqueName: \"kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br\") pod \"auto-csr-approver-29524336-74wgt\" (UID: \"e3f1665b-5fcf-4742-bb14-9479d30e37bc\") " pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.399919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl7br\" (UniqueName: \"kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br\") pod \"auto-csr-approver-29524336-74wgt\" (UID: \"e3f1665b-5fcf-4742-bb14-9479d30e37bc\") " pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.479326 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:00 crc kubenswrapper[5108]: I0219 00:16:00.902649 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-74wgt"] Feb 19 00:16:00 crc kubenswrapper[5108]: W0219 00:16:00.907008 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f1665b_5fcf_4742_bb14_9479d30e37bc.slice/crio-f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4 WatchSource:0}: Error finding container f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4: Status 404 returned error can't find the container with id f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4 Feb 19 00:16:01 crc kubenswrapper[5108]: I0219 00:16:01.878271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-74wgt" 
event={"ID":"e3f1665b-5fcf-4742-bb14-9479d30e37bc","Type":"ContainerStarted","Data":"f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4"} Feb 19 00:16:03 crc kubenswrapper[5108]: I0219 00:16:03.889987 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-74wgt" event={"ID":"e3f1665b-5fcf-4742-bb14-9479d30e37bc","Type":"ContainerStarted","Data":"4bcac3ca558642286f4500ba772d580f9025584bb612c5589709a276d4d591f3"} Feb 19 00:16:03 crc kubenswrapper[5108]: I0219 00:16:03.907152 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524336-74wgt" podStartSLOduration=1.413282556 podStartE2EDuration="3.907136887s" podCreationTimestamp="2026-02-19 00:16:00 +0000 UTC" firstStartedPulling="2026-02-19 00:16:00.908459469 +0000 UTC m=+419.875105777" lastFinishedPulling="2026-02-19 00:16:03.40231381 +0000 UTC m=+422.368960108" observedRunningTime="2026-02-19 00:16:03.903898652 +0000 UTC m=+422.870545000" watchObservedRunningTime="2026-02-19 00:16:03.907136887 +0000 UTC m=+422.873783195" Feb 19 00:16:04 crc kubenswrapper[5108]: I0219 00:16:04.043300 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-dv9l5" Feb 19 00:16:04 crc kubenswrapper[5108]: I0219 00:16:04.063622 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-dv9l5" Feb 19 00:16:04 crc kubenswrapper[5108]: I0219 00:16:04.898067 5108 generic.go:358] "Generic (PLEG): container finished" podID="e3f1665b-5fcf-4742-bb14-9479d30e37bc" containerID="4bcac3ca558642286f4500ba772d580f9025584bb612c5589709a276d4d591f3" exitCode=0 Feb 19 00:16:04 crc kubenswrapper[5108]: I0219 00:16:04.898109 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-74wgt" 
event={"ID":"e3f1665b-5fcf-4742-bb14-9479d30e37bc","Type":"ContainerDied","Data":"4bcac3ca558642286f4500ba772d580f9025584bb612c5589709a276d4d591f3"} Feb 19 00:16:05 crc kubenswrapper[5108]: I0219 00:16:05.066219 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-21 00:11:04 +0000 UTC" deadline="2026-03-13 22:05:29.698624308 +0000 UTC" Feb 19 00:16:05 crc kubenswrapper[5108]: I0219 00:16:05.066260 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="549h49m24.632368396s" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.067174 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-21 00:11:04 +0000 UTC" deadline="2026-03-16 01:59:44.17646478 +0000 UTC" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.067238 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="601h43m38.10922983s" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.145269 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.145366 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.149865 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.240549 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl7br\" (UniqueName: \"kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br\") pod \"e3f1665b-5fcf-4742-bb14-9479d30e37bc\" (UID: \"e3f1665b-5fcf-4742-bb14-9479d30e37bc\") " Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.246196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br" (OuterVolumeSpecName: "kube-api-access-kl7br") pod "e3f1665b-5fcf-4742-bb14-9479d30e37bc" (UID: "e3f1665b-5fcf-4742-bb14-9479d30e37bc"). InnerVolumeSpecName "kube-api-access-kl7br". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.341787 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kl7br\" (UniqueName: \"kubernetes.io/projected/e3f1665b-5fcf-4742-bb14-9479d30e37bc-kube-api-access-kl7br\") on node \"crc\" DevicePath \"\"" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.912090 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-74wgt" event={"ID":"e3f1665b-5fcf-4742-bb14-9479d30e37bc","Type":"ContainerDied","Data":"f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4"} Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.912133 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bf1443050972ce7efe95807585541df1e0f67a1abcca9ffe8db6fd1a68a9d4" Feb 19 00:16:06 crc kubenswrapper[5108]: I0219 00:16:06.912117 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-74wgt" Feb 19 00:16:36 crc kubenswrapper[5108]: I0219 00:16:36.145341 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:36 crc kubenswrapper[5108]: I0219 00:16:36.146322 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.145601 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.146338 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.146426 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.147507 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.147562 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7" gracePeriod=600 Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.757767 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7" exitCode=0 Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.757920 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7"} Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.758019 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c"} Feb 19 00:17:06 crc kubenswrapper[5108]: I0219 00:17:06.758045 5108 scope.go:117] "RemoveContainer" containerID="9b8644414b23c69cc69ee1daf8f442b3f33a0c424abf081e0b094c5eb0209682" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.152642 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524338-xmd4w"] Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 
00:18:00.154345 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3f1665b-5fcf-4742-bb14-9479d30e37bc" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.154370 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1665b-5fcf-4742-bb14-9479d30e37bc" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.154560 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3f1665b-5fcf-4742-bb14-9479d30e37bc" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.164146 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.165266 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-xmd4w"] Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.165916 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.166559 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.167814 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.314007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5wx4\" (UniqueName: \"kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4\") pod \"auto-csr-approver-29524338-xmd4w\" (UID: \"11b07faf-6463-47aa-9306-e36be1281fc5\") " pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.415471 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5wx4\" (UniqueName: \"kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4\") pod \"auto-csr-approver-29524338-xmd4w\" (UID: \"11b07faf-6463-47aa-9306-e36be1281fc5\") " pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.459693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5wx4\" (UniqueName: \"kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4\") pod \"auto-csr-approver-29524338-xmd4w\" (UID: \"11b07faf-6463-47aa-9306-e36be1281fc5\") " pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.527019 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:00 crc kubenswrapper[5108]: I0219 00:18:00.742895 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-xmd4w"] Feb 19 00:18:01 crc kubenswrapper[5108]: I0219 00:18:01.129444 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" event={"ID":"11b07faf-6463-47aa-9306-e36be1281fc5","Type":"ContainerStarted","Data":"ca441a6de6f34fa920940ebd0e9f001814ee71428ff3265b05d7a2319ff69861"} Feb 19 00:18:02 crc kubenswrapper[5108]: I0219 00:18:02.134781 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" event={"ID":"11b07faf-6463-47aa-9306-e36be1281fc5","Type":"ContainerStarted","Data":"74e1f2b2f42d99655fc66868599149aa436a7aa2f3974ea189c471fbdbce79d7"} Feb 19 00:18:02 crc kubenswrapper[5108]: I0219 00:18:02.151183 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" podStartSLOduration=1.247243061 
podStartE2EDuration="2.151162154s" podCreationTimestamp="2026-02-19 00:18:00 +0000 UTC" firstStartedPulling="2026-02-19 00:18:00.746631333 +0000 UTC m=+539.713277641" lastFinishedPulling="2026-02-19 00:18:01.650550376 +0000 UTC m=+540.617196734" observedRunningTime="2026-02-19 00:18:02.146396502 +0000 UTC m=+541.113042810" watchObservedRunningTime="2026-02-19 00:18:02.151162154 +0000 UTC m=+541.117808472" Feb 19 00:18:03 crc kubenswrapper[5108]: I0219 00:18:03.140591 5108 generic.go:358] "Generic (PLEG): container finished" podID="11b07faf-6463-47aa-9306-e36be1281fc5" containerID="74e1f2b2f42d99655fc66868599149aa436a7aa2f3974ea189c471fbdbce79d7" exitCode=0 Feb 19 00:18:03 crc kubenswrapper[5108]: I0219 00:18:03.140721 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" event={"ID":"11b07faf-6463-47aa-9306-e36be1281fc5","Type":"ContainerDied","Data":"74e1f2b2f42d99655fc66868599149aa436a7aa2f3974ea189c471fbdbce79d7"} Feb 19 00:18:04 crc kubenswrapper[5108]: I0219 00:18:04.355870 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:18:04 crc kubenswrapper[5108]: I0219 00:18:04.362038 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5wx4\" (UniqueName: \"kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4\") pod \"11b07faf-6463-47aa-9306-e36be1281fc5\" (UID: \"11b07faf-6463-47aa-9306-e36be1281fc5\") " Feb 19 00:18:04 crc kubenswrapper[5108]: I0219 00:18:04.373850 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4" (OuterVolumeSpecName: "kube-api-access-v5wx4") pod "11b07faf-6463-47aa-9306-e36be1281fc5" (UID: "11b07faf-6463-47aa-9306-e36be1281fc5"). InnerVolumeSpecName "kube-api-access-v5wx4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:18:04 crc kubenswrapper[5108]: I0219 00:18:04.463625 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5wx4\" (UniqueName: \"kubernetes.io/projected/11b07faf-6463-47aa-9306-e36be1281fc5-kube-api-access-v5wx4\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:05 crc kubenswrapper[5108]: I0219 00:18:05.158131 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" event={"ID":"11b07faf-6463-47aa-9306-e36be1281fc5","Type":"ContainerDied","Data":"ca441a6de6f34fa920940ebd0e9f001814ee71428ff3265b05d7a2319ff69861"} Feb 19 00:18:05 crc kubenswrapper[5108]: I0219 00:18:05.158184 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca441a6de6f34fa920940ebd0e9f001814ee71428ff3265b05d7a2319ff69861" Feb 19 00:18:05 crc kubenswrapper[5108]: I0219 00:18:05.158254 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-xmd4w" Feb 19 00:19:02 crc kubenswrapper[5108]: I0219 00:19:02.145172 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:19:02 crc kubenswrapper[5108]: I0219 00:19:02.150002 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:19:06 crc kubenswrapper[5108]: I0219 00:19:06.145486 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:19:06 crc kubenswrapper[5108]: I0219 
00:19:06.145845 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.789172 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4"] Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.789986 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="kube-rbac-proxy" containerID="cri-o://752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.790006 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="ovnkube-cluster-manager" containerID="cri-o://d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.978073 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.989862 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vk6d6"] Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990502 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="nbdb" containerID="cri-o://c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990538 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-controller" containerID="cri-o://61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990543 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990509 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="sbdb" containerID="cri-o://9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990646 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="northd" 
containerID="cri-o://e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990651 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-node" containerID="cri-o://97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" gracePeriod=30 Feb 19 00:19:24 crc kubenswrapper[5108]: I0219 00:19:24.990674 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-acl-logging" containerID="cri-o://06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" gracePeriod=30 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.011818 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64"] Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.012743 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="ovnkube-cluster-manager" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.012835 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="ovnkube-cluster-manager" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.012911 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="kube-rbac-proxy" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013014 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="kube-rbac-proxy" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013111 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="11b07faf-6463-47aa-9306-e36be1281fc5" containerName="oc" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013180 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b07faf-6463-47aa-9306-e36be1281fc5" containerName="oc" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013371 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="11b07faf-6463-47aa-9306-e36be1281fc5" containerName="oc" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013448 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="kube-rbac-proxy" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.013510 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" containerName="ovnkube-cluster-manager" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.020472 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.020902 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovnkube-controller" containerID="cri-o://1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" gracePeriod=30 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.071418 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config\") pod \"c556da79-b025-425f-b2cd-ac55950c66cc\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.071501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides\") pod \"c556da79-b025-425f-b2cd-ac55950c66cc\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.071600 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2dh5\" (UniqueName: \"kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5\") pod \"c556da79-b025-425f-b2cd-ac55950c66cc\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.071693 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert\") pod \"c556da79-b025-425f-b2cd-ac55950c66cc\" (UID: \"c556da79-b025-425f-b2cd-ac55950c66cc\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.072594 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c556da79-b025-425f-b2cd-ac55950c66cc" (UID: "c556da79-b025-425f-b2cd-ac55950c66cc"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.078012 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "c556da79-b025-425f-b2cd-ac55950c66cc" (UID: "c556da79-b025-425f-b2cd-ac55950c66cc"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.079663 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c556da79-b025-425f-b2cd-ac55950c66cc" (UID: "c556da79-b025-425f-b2cd-ac55950c66cc"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.080417 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5" (OuterVolumeSpecName: "kube-api-access-q2dh5") pod "c556da79-b025-425f-b2cd-ac55950c66cc" (UID: "c556da79-b025-425f-b2cd-ac55950c66cc"). InnerVolumeSpecName "kube-api-access-q2dh5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcc712b8-ef88-413f-9740-3209ad031c16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173247 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173345 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjmq\" (UniqueName: \"kubernetes.io/projected/bcc712b8-ef88-413f-9740-3209ad031c16-kube-api-access-xcjmq\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173628 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173689 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c556da79-b025-425f-b2cd-ac55950c66cc-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173700 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2dh5\" (UniqueName: \"kubernetes.io/projected/c556da79-b025-425f-b2cd-ac55950c66cc-kube-api-access-q2dh5\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.173711 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c556da79-b025-425f-b2cd-ac55950c66cc-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.274819 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcc712b8-ef88-413f-9740-3209ad031c16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.274879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.274917 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjmq\" (UniqueName: \"kubernetes.io/projected/bcc712b8-ef88-413f-9740-3209ad031c16-kube-api-access-xcjmq\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.275106 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.276621 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: 
\"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.276623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcc712b8-ef88-413f-9740-3209ad031c16-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.279648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcc712b8-ef88-413f-9740-3209ad031c16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.291796 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vk6d6_7f4459ce-0bd5-493a-813f-977d6e26f440/ovn-acl-logging/0.log" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.292354 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vk6d6_7f4459ce-0bd5-493a-813f-977d6e26f440/ovn-controller/0.log" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.292729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjmq\" (UniqueName: \"kubernetes.io/projected/bcc712b8-ef88-413f-9740-3209ad031c16-kube-api-access-xcjmq\") pod \"ovnkube-control-plane-97c9b6c48-vjm64\" (UID: \"bcc712b8-ef88-413f-9740-3209ad031c16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.292771 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.337130 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4bf9f"] Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338055 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338081 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338093 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kubecfg-setup" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338105 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kubecfg-setup" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338117 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338126 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338142 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="sbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338149 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="sbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338166 5108 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="nbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338173 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="nbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338189 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-node" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338198 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-node" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338217 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovnkube-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338227 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovnkube-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338237 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-acl-logging" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338245 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-acl-logging" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338264 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="northd" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338272 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="northd" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338366 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338381 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="nbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338392 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="sbdb" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338404 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovnkube-controller" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338417 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="northd" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338431 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="ovn-acl-logging" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338441 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.338450 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerName="kube-rbac-proxy-node" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.344451 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376379 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376415 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376445 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376466 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376562 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash" (OuterVolumeSpecName: "host-slash") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376586 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376619 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376579 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376664 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376727 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log" (OuterVolumeSpecName: "node-log") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377563 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.376646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377626 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377697 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377751 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377783 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377840 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377961 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378206 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.377858 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdz5q\" (UniqueName: \"kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378255 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378269 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378293 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378451 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378512 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378707 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378738 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns\") pod \"7f4459ce-0bd5-493a-813f-977d6e26f440\" (UID: \"7f4459ce-0bd5-493a-813f-977d6e26f440\") " Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.378805 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket" (OuterVolumeSpecName: "log-socket") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379039 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379081 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379171 5108 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379222 5108 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379234 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379244 5108 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379253 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379261 5108 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379269 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 
crc kubenswrapper[5108]: I0219 00:19:25.379278 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379286 5108 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379296 5108 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379305 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379313 5108 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379322 5108 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379330 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379338 5108 reconciler_common.go:299] 
"Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379346 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7f4459ce-0bd5-493a-813f-977d6e26f440-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.379354 5108 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.381764 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.382425 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q" (OuterVolumeSpecName: "kube-api-access-cdz5q") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "kube-api-access-cdz5q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.388867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7f4459ce-0bd5-493a-813f-977d6e26f440" (UID: "7f4459ce-0bd5-493a-813f-977d6e26f440"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.445633 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.464794 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480411 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-systemd-units\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480482 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-netns\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480502 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-env-overrides\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480526 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: 
\"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480551 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-slash\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480577 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-systemd\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480598 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480617 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-config\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-netd\") pod 
\"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480681 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-ovn\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480700 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-script-lib\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480764 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-node-log\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480793 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-etc-openvswitch\") 
pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480819 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovn-node-metrics-cert\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480850 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-bin\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480879 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-log-socket\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480911 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5bhp\" (UniqueName: \"kubernetes.io/projected/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-kube-api-access-r5bhp\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.480955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-var-lib-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.481000 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-kubelet\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.481044 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7f4459ce-0bd5-493a-813f-977d6e26f440-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.481058 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdz5q\" (UniqueName: \"kubernetes.io/projected/7f4459ce-0bd5-493a-813f-977d6e26f440-kube-api-access-cdz5q\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.481069 5108 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7f4459ce-0bd5-493a-813f-977d6e26f440-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.581805 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-kubelet\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.581855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-systemd-units\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582019 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-kubelet\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-netns\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-netns\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582153 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-env-overrides\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-systemd-units\") pod \"ovnkube-node-4bf9f\" 
(UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582271 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-slash\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582340 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-systemd\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582369 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-systemd\") pod \"ovnkube-node-4bf9f\" (UID: 
\"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582381 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582409 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-config\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582428 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-run-ovn-kubernetes\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582440 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-netd\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582394 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-slash\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-ovn\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-script-lib\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582491 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582516 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-node-log\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582536 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-etc-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: 
I0219 00:19:25.582556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovn-node-metrics-cert\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582577 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-bin\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582592 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-log-socket\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582616 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5bhp\" (UniqueName: \"kubernetes.io/projected/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-kube-api-access-r5bhp\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-var-lib-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582705 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-var-lib-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582477 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-netd\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582738 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582494 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-run-ovn\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582764 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-node-log\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.582785 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-etc-openvswitch\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.583051 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-env-overrides\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.583120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-log-socket\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.583122 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-script-lib\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.583129 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovnkube-config\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.583170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-host-cni-bin\") pod \"ovnkube-node-4bf9f\" (UID: 
\"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.590404 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-ovn-node-metrics-cert\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.602696 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5bhp\" (UniqueName: \"kubernetes.io/projected/6c2b94d2-56b6-460c-af3e-7519d4ac9b54-kube-api-access-r5bhp\") pod \"ovnkube-node-4bf9f\" (UID: \"6c2b94d2-56b6-460c-af3e-7519d4ac9b54\") " pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.660837 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.748440 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" event={"ID":"bcc712b8-ef88-413f-9740-3209ad031c16","Type":"ContainerStarted","Data":"2c772b9fc68ffbd9076b946c53da83ecc45dad7027fb16621a7a1c3c7cbd2d27"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.753002 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vk6d6_7f4459ce-0bd5-493a-813f-977d6e26f440/ovn-acl-logging/0.log" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.754863 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vk6d6_7f4459ce-0bd5-493a-813f-977d6e26f440/ovn-controller/0.log" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756198 5108 generic.go:358] "Generic (PLEG): container finished" 
podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756228 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756236 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756272 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756280 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756290 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756297 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" exitCode=143 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756305 5108 generic.go:358] "Generic (PLEG): container finished" podID="7f4459ce-0bd5-493a-813f-977d6e26f440" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" exitCode=143 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756455 5108 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756468 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756619 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756644 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756700 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756714 5108 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756744 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756753 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756759 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756767 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756782 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756789 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756794 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756798 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756821 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756826 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756831 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756835 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756840 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756856 5108 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756862 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756866 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756871 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756876 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756896 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756901 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756907 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756912 5108 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756920 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vk6d6" event={"ID":"7f4459ce-0bd5-493a-813f-977d6e26f440","Type":"ContainerDied","Data":"97ba31899a8bf93cbd0751220565b42a6c3d1a45ebe6f0d259c53cf41d8bb36f"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756953 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756962 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756969 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756976 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756988 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.756995 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} Feb 19 
00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.757002 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.757008 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.757040 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.785397 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.785634 5108 generic.go:358] "Generic (PLEG): container finished" podID="c8ba935e-bb01-466a-8b94-8b0c15e535b1" containerID="b3e13291cabcf2b49b52130be3a87674eb083bb09308bb95aea7cc9e7a4c8884" exitCode=2 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.785726 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v42mj" event={"ID":"c8ba935e-bb01-466a-8b94-8b0c15e535b1","Type":"ContainerDied","Data":"b3e13291cabcf2b49b52130be3a87674eb083bb09308bb95aea7cc9e7a4c8884"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.786247 5108 scope.go:117] "RemoveContainer" containerID="b3e13291cabcf2b49b52130be3a87674eb083bb09308bb95aea7cc9e7a4c8884" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790691 5108 generic.go:358] "Generic (PLEG): container finished" podID="c556da79-b025-425f-b2cd-ac55950c66cc" containerID="d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 
00:19:25.790721 5108 generic.go:358] "Generic (PLEG): container finished" podID="c556da79-b025-425f-b2cd-ac55950c66cc" containerID="752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00" exitCode=0 Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790793 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerDied","Data":"d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790815 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790824 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790834 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerDied","Data":"752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790841 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790846 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790852 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" event={"ID":"c556da79-b025-425f-b2cd-ac55950c66cc","Type":"ContainerDied","Data":"c98da5235f802bdfdfd12b902f8fa68fcd0688f85988187de3b6742eadfe9a59"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790859 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790865 5108 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.790984 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.798286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"7681ab6ee279f351fb38ef0ee148344482395eebb3cd4e153e29b376152c0939"} Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.811700 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vk6d6"] Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.815846 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vk6d6"] Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.837029 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.853677 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4459ce-0bd5-493a-813f-977d6e26f440" 
path="/var/lib/kubelet/pods/7f4459ce-0bd5-493a-813f-977d6e26f440/volumes" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.889252 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.904844 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.918789 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:25 crc kubenswrapper[5108]: I0219 00:19:25.978327 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.001892 5108 scope.go:117] "RemoveContainer" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.022179 5108 scope.go:117] "RemoveContainer" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.044727 5108 scope.go:117] "RemoveContainer" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.065563 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.066237 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc 
kubenswrapper[5108]: I0219 00:19:26.066280 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} err="failed to get container status \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.066322 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.066876 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.066915 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} err="failed to get container status \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": rpc error: code = NotFound desc = could not find container \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.066949 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 
00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.067545 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not exist" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.067627 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} err="failed to get container status \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.067666 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.068159 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.068204 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} err="failed to get container status 
\"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.068235 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.068693 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.068726 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} err="failed to get container status \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.068747 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.068993 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.069026 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} err="failed to get container status \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": rpc error: code = NotFound desc = could not find container \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.069044 5108 scope.go:117] "RemoveContainer" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.069253 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": container with ID starting with 06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5 not found: ID does not exist" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.069284 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} err="failed to get container status \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": rpc error: code = NotFound desc = could not find container \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": container with ID 
starting with 06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.069301 5108 scope.go:117] "RemoveContainer" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.070425 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": container with ID starting with 61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94 not found: ID does not exist" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.070451 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} err="failed to get container status \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": rpc error: code = NotFound desc = could not find container \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": container with ID starting with 61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.070466 5108 scope.go:117] "RemoveContainer" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 00:19:26 crc kubenswrapper[5108]: E0219 00:19:26.071051 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": container with ID starting with acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c not found: ID does not exist" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 
00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071079 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} err="failed to get container status \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": rpc error: code = NotFound desc = could not find container \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": container with ID starting with acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071099 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071358 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} err="failed to get container status \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071375 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071644 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} err="failed to get container status \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": rpc error: code = NotFound desc = could not find container 
\"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071665 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071865 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} err="failed to get container status \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.071891 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072124 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} err="failed to get container status \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072149 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072330 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} err="failed to get container status \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072347 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072654 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} err="failed to get container status \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": rpc error: code = NotFound desc = could not find container \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072682 5108 scope.go:117] "RemoveContainer" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.072997 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} err="failed to get container status \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": rpc error: code = NotFound desc = could not find container \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": container with ID starting with 
06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073022 5108 scope.go:117] "RemoveContainer" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073291 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} err="failed to get container status \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": rpc error: code = NotFound desc = could not find container \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": container with ID starting with 61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073313 5108 scope.go:117] "RemoveContainer" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073551 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} err="failed to get container status \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": rpc error: code = NotFound desc = could not find container \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": container with ID starting with acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073577 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073843 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} err="failed to get container status \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.073878 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.074214 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} err="failed to get container status \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": rpc error: code = NotFound desc = could not find container \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.074232 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.074594 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} err="failed to get container status \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not 
exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.074646 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.076368 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} err="failed to get container status \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.076396 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.076644 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} err="failed to get container status \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.076658 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.076960 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} err="failed to get container status 
\"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": rpc error: code = NotFound desc = could not find container \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077008 5108 scope.go:117] "RemoveContainer" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077306 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} err="failed to get container status \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": rpc error: code = NotFound desc = could not find container \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": container with ID starting with 06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077341 5108 scope.go:117] "RemoveContainer" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077592 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} err="failed to get container status \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": rpc error: code = NotFound desc = could not find container \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": container with ID starting with 61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077615 5108 scope.go:117] "RemoveContainer" 
containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077847 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} err="failed to get container status \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": rpc error: code = NotFound desc = could not find container \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": container with ID starting with acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.077870 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078188 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} err="failed to get container status \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078218 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078455 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} err="failed to get container status \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": rpc error: code = NotFound desc = could 
not find container \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078479 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078712 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} err="failed to get container status \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078735 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.078993 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} err="failed to get container status \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079017 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 
00:19:26.079248 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} err="failed to get container status \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079272 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079542 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} err="failed to get container status \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": rpc error: code = NotFound desc = could not find container \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079565 5108 scope.go:117] "RemoveContainer" containerID="06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079790 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5"} err="failed to get container status \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": rpc error: code = NotFound desc = could not find container \"06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5\": container with ID starting with 
06d244085bb0d68802cf6de012c5b29038d13e6194f225969586d26bf17c2ea5 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.079814 5108 scope.go:117] "RemoveContainer" containerID="61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080126 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94"} err="failed to get container status \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": rpc error: code = NotFound desc = could not find container \"61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94\": container with ID starting with 61a5d47aad4f449a17d392cd0c81e37eac0825bc03ff3803c773fb451bf71a94 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080152 5108 scope.go:117] "RemoveContainer" containerID="acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080395 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c"} err="failed to get container status \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": rpc error: code = NotFound desc = could not find container \"acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c\": container with ID starting with acf4e2afc948f1ce4537ca4ab37c9fe1885727d9a2111b6f45408341cb66b62c not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080420 5108 scope.go:117] "RemoveContainer" containerID="1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080660 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8"} err="failed to get container status \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": rpc error: code = NotFound desc = could not find container \"1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8\": container with ID starting with 1f016d312a34bc9751b3b23c3757a52b60daf78b05de0aa900e9f3c9024411f8 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080701 5108 scope.go:117] "RemoveContainer" containerID="9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080968 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482"} err="failed to get container status \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": rpc error: code = NotFound desc = could not find container \"9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482\": container with ID starting with 9aa52b27aa896ff53449c571f5d194e204403c75011b7aa70f15e6cf73e6e482 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.080991 5108 scope.go:117] "RemoveContainer" containerID="c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081237 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271"} err="failed to get container status \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": rpc error: code = NotFound desc = could not find container \"c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271\": container with ID starting with c1ad3c9b8c285074da2cfbb3cc3ba58ed52190e55970201358d0aab2ddcfe271 not found: ID does not 
exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081259 5108 scope.go:117] "RemoveContainer" containerID="e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081502 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2"} err="failed to get container status \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": rpc error: code = NotFound desc = could not find container \"e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2\": container with ID starting with e5516902d73023e0d482251eba1d04ec0d156b1caefb647302ee6a71988c40d2 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081525 5108 scope.go:117] "RemoveContainer" containerID="cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081760 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa"} err="failed to get container status \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": rpc error: code = NotFound desc = could not find container \"cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa\": container with ID starting with cfc2e10e9fb21227ff88bc441ef768ca51dca4147b121fbcc57d70bab63f5daa not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.081782 5108 scope.go:117] "RemoveContainer" containerID="97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.082070 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567"} err="failed to get container status 
\"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": rpc error: code = NotFound desc = could not find container \"97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567\": container with ID starting with 97157ee0408cb0d9d113d4b1c3b86642012bda3679186952b8ad8cab731f4567 not found: ID does not exist" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.806877 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" event={"ID":"bcc712b8-ef88-413f-9740-3209ad031c16","Type":"ContainerStarted","Data":"6394efc4aea5d51632eeddebc31c39c8cbcde21bc45e9716773f8e403185ad68"} Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.806927 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" event={"ID":"bcc712b8-ef88-413f-9740-3209ad031c16","Type":"ContainerStarted","Data":"29128d99e7079293efb558b2d6be230a1c6311fa5e54e611c0bb0b4843c2df9c"} Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.810017 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.810157 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v42mj" event={"ID":"c8ba935e-bb01-466a-8b94-8b0c15e535b1","Type":"ContainerStarted","Data":"dae5c673a89bbac57165c775ab4a493c1150c977afc6139d6f27396ecb83ef8f"} Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.812570 5108 generic.go:358] "Generic (PLEG): container finished" podID="6c2b94d2-56b6-460c-af3e-7519d4ac9b54" containerID="d45baf4cdd542aa05604618e5eb64f67c992a0e45c10f88f21a2bda16f857949" exitCode=0 Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.812644 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" 
event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerDied","Data":"d45baf4cdd542aa05604618e5eb64f67c992a0e45c10f88f21a2bda16f857949"} Feb 19 00:19:26 crc kubenswrapper[5108]: I0219 00:19:26.830952 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vjm64" podStartSLOduration=2.830911998 podStartE2EDuration="2.830911998s" podCreationTimestamp="2026-02-19 00:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:19:26.824617927 +0000 UTC m=+625.791264245" watchObservedRunningTime="2026-02-19 00:19:26.830911998 +0000 UTC m=+625.797558316" Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823463 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"48aef92cc3269dc3fa87035dd99dc646e142f374a06d765f75df9e49981e0601"} Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823515 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"8a26c36cdbb56c5afd565ff9014553e214c420eac5be6263442964a7fe80bbdb"} Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823527 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"e9c8ec4d3b22e88ee74e404c8b3aba3ebb168fdcd834414203d7bafd79a936b1"} Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823537 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" 
event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"52d6c0ed926fb32902641c1ec35eefa7aef1bf2518f86b6a09c0a8be30a11116"} Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823548 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"0b877c92a0b4cf6e05b12a243375f2c76d772dcf5118d0309a85ae9cd95b8fa9"} Feb 19 00:19:27 crc kubenswrapper[5108]: I0219 00:19:27.823557 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"daad3712ef304d51904bc6a2a23510069cdf461554c34dd3edc604d02fc3ffc9"} Feb 19 00:19:30 crc kubenswrapper[5108]: I0219 00:19:30.857143 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"d86cfb88fa8bb7fa50201bcd6ace8795674211cc0b3a3194ce920e11276d2f17"} Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.872588 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" event={"ID":"6c2b94d2-56b6-460c-af3e-7519d4ac9b54","Type":"ContainerStarted","Data":"9c549bc1d5c9adf68ddbe6fe9e224ff0d874d0cd756bf55da10ab3a20636448b"} Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.873500 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.873535 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.873547 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.903816 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.908345 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:19:32 crc kubenswrapper[5108]: I0219 00:19:32.909405 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" podStartSLOduration=7.909386166 podStartE2EDuration="7.909386166s" podCreationTimestamp="2026-02-19 00:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:19:32.902967201 +0000 UTC m=+631.869613509" watchObservedRunningTime="2026-02-19 00:19:32.909386166 +0000 UTC m=+631.876032474" Feb 19 00:19:36 crc kubenswrapper[5108]: I0219 00:19:36.144894 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:19:36 crc kubenswrapper[5108]: I0219 00:19:36.145711 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:19:55 crc kubenswrapper[5108]: I0219 00:19:55.879020 5108 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podc556da79-b025-425f-b2cd-ac55950c66cc"] err="unable to destroy cgroup paths for 
cgroup [kubepods burstable podc556da79-b025-425f-b2cd-ac55950c66cc] : Timed out while waiting for systemd to remove kubepods-burstable-podc556da79_b025_425f_b2cd_ac55950c66cc.slice" Feb 19 00:19:55 crc kubenswrapper[5108]: E0219 00:19:55.879486 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podc556da79-b025-425f-b2cd-ac55950c66cc] : unable to destroy cgroup paths for cgroup [kubepods burstable podc556da79-b025-425f-b2cd-ac55950c66cc] : Timed out while waiting for systemd to remove kubepods-burstable-podc556da79_b025_425f_b2cd_ac55950c66cc.slice" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" Feb 19 00:19:56 crc kubenswrapper[5108]: I0219 00:19:56.023202 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4" Feb 19 00:19:56 crc kubenswrapper[5108]: I0219 00:19:56.051832 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4"] Feb 19 00:19:56 crc kubenswrapper[5108]: I0219 00:19:56.059463 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bbrq4"] Feb 19 00:19:57 crc kubenswrapper[5108]: I0219 00:19:57.856974 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c556da79-b025-425f-b2cd-ac55950c66cc" path="/var/lib/kubelet/pods/c556da79-b025-425f-b2cd-ac55950c66cc/volumes" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.144439 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524340-sbl4s"] Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.167559 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-sbl4s"] Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.167718 5108 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.170833 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.171079 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.171164 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.268368 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrph\" (UniqueName: \"kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph\") pod \"auto-csr-approver-29524340-sbl4s\" (UID: \"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb\") " pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.369416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrph\" (UniqueName: \"kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph\") pod \"auto-csr-approver-29524340-sbl4s\" (UID: \"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb\") " pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.388668 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrph\" (UniqueName: \"kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph\") pod \"auto-csr-approver-29524340-sbl4s\" (UID: \"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb\") " pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:00 crc 
kubenswrapper[5108]: I0219 00:20:00.485650 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:00 crc kubenswrapper[5108]: I0219 00:20:00.875851 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-sbl4s"] Feb 19 00:20:01 crc kubenswrapper[5108]: I0219 00:20:01.074699 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" event={"ID":"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb","Type":"ContainerStarted","Data":"09a00d623949c30aa23254397b1bd5057b3d61e848aec057e8e3848a3c8c68c5"} Feb 19 00:20:02 crc kubenswrapper[5108]: I0219 00:20:02.106625 5108 scope.go:117] "RemoveContainer" containerID="752970afc80aa565e5975eddc18e8fd4e82e45a68a6892317608b31e41923d00" Feb 19 00:20:02 crc kubenswrapper[5108]: I0219 00:20:02.166666 5108 scope.go:117] "RemoveContainer" containerID="d3dfa12674c2788de5f059f50ba304fb64657f395db4d24f43c3250c761f359c" Feb 19 00:20:03 crc kubenswrapper[5108]: I0219 00:20:03.088244 5108 generic.go:358] "Generic (PLEG): container finished" podID="ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" containerID="0cf214eca2fd73c9a451b5a11faec6b3b3666a41216232d0484449395b5fafa4" exitCode=0 Feb 19 00:20:03 crc kubenswrapper[5108]: I0219 00:20:03.088318 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" event={"ID":"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb","Type":"ContainerDied","Data":"0cf214eca2fd73c9a451b5a11faec6b3b3666a41216232d0484449395b5fafa4"} Feb 19 00:20:04 crc kubenswrapper[5108]: I0219 00:20:04.301061 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:04 crc kubenswrapper[5108]: I0219 00:20:04.421644 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrph\" (UniqueName: \"kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph\") pod \"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb\" (UID: \"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb\") " Feb 19 00:20:04 crc kubenswrapper[5108]: I0219 00:20:04.431786 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph" (OuterVolumeSpecName: "kube-api-access-tvrph") pod "ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" (UID: "ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb"). InnerVolumeSpecName "kube-api-access-tvrph". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:20:04 crc kubenswrapper[5108]: I0219 00:20:04.523600 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tvrph\" (UniqueName: \"kubernetes.io/projected/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb-kube-api-access-tvrph\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:04 crc kubenswrapper[5108]: I0219 00:20:04.909384 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4bf9f" Feb 19 00:20:05 crc kubenswrapper[5108]: I0219 00:20:05.100491 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" Feb 19 00:20:05 crc kubenswrapper[5108]: I0219 00:20:05.100525 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-sbl4s" event={"ID":"ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb","Type":"ContainerDied","Data":"09a00d623949c30aa23254397b1bd5057b3d61e848aec057e8e3848a3c8c68c5"} Feb 19 00:20:05 crc kubenswrapper[5108]: I0219 00:20:05.100552 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a00d623949c30aa23254397b1bd5057b3d61e848aec057e8e3848a3c8c68c5" Feb 19 00:20:06 crc kubenswrapper[5108]: I0219 00:20:06.145405 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:20:06 crc kubenswrapper[5108]: I0219 00:20:06.145481 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:20:06 crc kubenswrapper[5108]: I0219 00:20:06.145526 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:20:06 crc kubenswrapper[5108]: I0219 00:20:06.146205 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 19 00:20:06 crc kubenswrapper[5108]: I0219 00:20:06.146275 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c" gracePeriod=600 Feb 19 00:20:07 crc kubenswrapper[5108]: I0219 00:20:07.120413 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c" exitCode=0 Feb 19 00:20:07 crc kubenswrapper[5108]: I0219 00:20:07.120522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c"} Feb 19 00:20:07 crc kubenswrapper[5108]: I0219 00:20:07.121480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4"} Feb 19 00:20:07 crc kubenswrapper[5108]: I0219 00:20:07.121517 5108 scope.go:117] "RemoveContainer" containerID="fba2b7e8ff51ea182b75c4b0b3700458f1f8f0a3b312a9f4de0528c981dea8d7" Feb 19 00:20:24 crc kubenswrapper[5108]: I0219 00:20:24.945312 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:20:24 crc kubenswrapper[5108]: I0219 00:20:24.946141 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmwcg" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="registry-server" 
containerID="cri-o://f52498423010f44e21439c5b27a29081a680d4feea685d9d9c0d90e0d52d2dfb" gracePeriod=30 Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.238765 5108 generic.go:358] "Generic (PLEG): container finished" podID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerID="f52498423010f44e21439c5b27a29081a680d4feea685d9d9c0d90e0d52d2dfb" exitCode=0 Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.238873 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerDied","Data":"f52498423010f44e21439c5b27a29081a680d4feea685d9d9c0d90e0d52d2dfb"} Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.344817 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.428307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content\") pod \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.428375 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities\") pod \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.428554 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfbzv\" (UniqueName: \"kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv\") pod \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\" (UID: \"8dcd2a0c-4d54-41aa-b50b-881719d41cbf\") " Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 
00:20:25.430298 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities" (OuterVolumeSpecName: "utilities") pod "8dcd2a0c-4d54-41aa-b50b-881719d41cbf" (UID: "8dcd2a0c-4d54-41aa-b50b-881719d41cbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.444653 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv" (OuterVolumeSpecName: "kube-api-access-sfbzv") pod "8dcd2a0c-4d54-41aa-b50b-881719d41cbf" (UID: "8dcd2a0c-4d54-41aa-b50b-881719d41cbf"). InnerVolumeSpecName "kube-api-access-sfbzv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.446529 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dcd2a0c-4d54-41aa-b50b-881719d41cbf" (UID: "8dcd2a0c-4d54-41aa-b50b-881719d41cbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.530909 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.530978 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:25 crc kubenswrapper[5108]: I0219 00:20:25.530995 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sfbzv\" (UniqueName: \"kubernetes.io/projected/8dcd2a0c-4d54-41aa-b50b-881719d41cbf-kube-api-access-sfbzv\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.250414 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwcg" event={"ID":"8dcd2a0c-4d54-41aa-b50b-881719d41cbf","Type":"ContainerDied","Data":"de67fdf8578c6b6e24b534441bbc03991c37cd3b9c968fc12178adfdb9eea13c"} Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.250513 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwcg" Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.250905 5108 scope.go:117] "RemoveContainer" containerID="f52498423010f44e21439c5b27a29081a680d4feea685d9d9c0d90e0d52d2dfb" Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.288733 5108 scope.go:117] "RemoveContainer" containerID="23d86ad71724cd131f04da84c62354d388d09f8e5e766f9967965a67327cbaa7" Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.293997 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.301831 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwcg"] Feb 19 00:20:26 crc kubenswrapper[5108]: I0219 00:20:26.315364 5108 scope.go:117] "RemoveContainer" containerID="a1af2bcc7efde802b233bb98b147c48c40aac06ce0c7850550d663894928de3e" Feb 19 00:20:27 crc kubenswrapper[5108]: I0219 00:20:27.853152 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" path="/var/lib/kubelet/pods/8dcd2a0c-4d54-41aa-b50b-881719d41cbf/volumes" Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.573745 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"] Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574664 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="extract-utilities" Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574688 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="extract-utilities" Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574729 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="registry-server"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574744 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="registry-server"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574792 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="extract-content"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574806 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="extract-content"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574834 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" containerName="oc"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.574846 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" containerName="oc"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.575029 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8dcd2a0c-4d54-41aa-b50b-881719d41cbf" containerName="registry-server"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.575065 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" containerName="oc"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.585339 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"]
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.585505 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.588900 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.671599 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fjm\" (UniqueName: \"kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.671986 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.672024 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.772834 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.772903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.773182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52fjm\" (UniqueName: \"kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.773300 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.773641 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.793159 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52fjm\" (UniqueName: \"kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:28 crc kubenswrapper[5108]: I0219 00:20:28.899301 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:29 crc kubenswrapper[5108]: I0219 00:20:29.127084 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"]
Feb 19 00:20:29 crc kubenswrapper[5108]: I0219 00:20:29.271680 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerStarted","Data":"89740ae8dbf67645d2b75f7a3ea2ecc47d9caec61d190af00d2de10919001198"}
Feb 19 00:20:29 crc kubenswrapper[5108]: I0219 00:20:29.271736 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerStarted","Data":"3becb137a98b420d8c48c8088336b4ce993de12121eba698ea171b5c486926c6"}
Feb 19 00:20:30 crc kubenswrapper[5108]: I0219 00:20:30.280688 5108 generic.go:358] "Generic (PLEG): container finished" podID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerID="89740ae8dbf67645d2b75f7a3ea2ecc47d9caec61d190af00d2de10919001198" exitCode=0
Feb 19 00:20:30 crc kubenswrapper[5108]: I0219 00:20:30.280761 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerDied","Data":"89740ae8dbf67645d2b75f7a3ea2ecc47d9caec61d190af00d2de10919001198"}
Feb 19 00:20:32 crc kubenswrapper[5108]: I0219 00:20:32.295165 5108 generic.go:358] "Generic (PLEG): container finished" podID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerID="dd9e4188b31b38f0e4624aef8fdbe85f9c2544321c3ff9d784b9207c96a62d67" exitCode=0
Feb 19 00:20:32 crc kubenswrapper[5108]: I0219 00:20:32.296255 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerDied","Data":"dd9e4188b31b38f0e4624aef8fdbe85f9c2544321c3ff9d784b9207c96a62d67"}
Feb 19 00:20:33 crc kubenswrapper[5108]: I0219 00:20:33.308258 5108 generic.go:358] "Generic (PLEG): container finished" podID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerID="a542064292142e5ef302e6ad1f249afa9b04b383f7f68daefaec3c241bd86c4d" exitCode=0
Feb 19 00:20:33 crc kubenswrapper[5108]: I0219 00:20:33.308360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerDied","Data":"a542064292142e5ef302e6ad1f249afa9b04b383f7f68daefaec3c241bd86c4d"}
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.531952 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.656194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52fjm\" (UniqueName: \"kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm\") pod \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") "
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.656261 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle\") pod \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") "
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.656318 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util\") pod \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\" (UID: \"16b44f18-0a6f-4fc0-b923-f3bc5a596156\") "
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.659515 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle" (OuterVolumeSpecName: "bundle") pod "16b44f18-0a6f-4fc0-b923-f3bc5a596156" (UID: "16b44f18-0a6f-4fc0-b923-f3bc5a596156"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.663427 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.665610 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm" (OuterVolumeSpecName: "kube-api-access-52fjm") pod "16b44f18-0a6f-4fc0-b923-f3bc5a596156" (UID: "16b44f18-0a6f-4fc0-b923-f3bc5a596156"). InnerVolumeSpecName "kube-api-access-52fjm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.675560 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util" (OuterVolumeSpecName: "util") pod "16b44f18-0a6f-4fc0-b923-f3bc5a596156" (UID: "16b44f18-0a6f-4fc0-b923-f3bc5a596156"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.770135 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-52fjm\" (UniqueName: \"kubernetes.io/projected/16b44f18-0a6f-4fc0-b923-f3bc5a596156-kube-api-access-52fjm\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:34 crc kubenswrapper[5108]: I0219 00:20:34.770424 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b44f18-0a6f-4fc0-b923-f3bc5a596156-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.326855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56" event={"ID":"16b44f18-0a6f-4fc0-b923-f3bc5a596156","Type":"ContainerDied","Data":"3becb137a98b420d8c48c8088336b4ce993de12121eba698ea171b5c486926c6"}
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.326927 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3becb137a98b420d8c48c8088336b4ce993de12121eba698ea171b5c486926c6"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.326883 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.977818 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"]
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978709 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="pull"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978733 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="pull"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978784 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="extract"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978793 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="extract"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978808 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="util"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.978819 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="util"
Feb 19 00:20:35 crc kubenswrapper[5108]: I0219 00:20:35.979038 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="16b44f18-0a6f-4fc0-b923-f3bc5a596156" containerName="extract"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.036644 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"]
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.036812 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.040076 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.137607 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.137667 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.137750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.239311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.239427 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.239560 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.240558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.240566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.271809 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.356428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.618013 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"]
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.776793 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"]
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.781624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.794604 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"]
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.844668 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hn9\" (UniqueName: \"kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.844813 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.844839 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.945808 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.945854 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.945886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hn9\" (UniqueName: \"kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.946491 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.946504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:36 crc kubenswrapper[5108]: I0219 00:20:36.975064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hn9\" (UniqueName: \"kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.103035 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.308942 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"]
Feb 19 00:20:37 crc kubenswrapper[5108]: W0219 00:20:37.315185 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11b0ad91_9b7a_4520_8abb_9ca84c22c5cb.slice/crio-360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d WatchSource:0}: Error finding container 360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d: Status 404 returned error can't find the container with id 360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.338113 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg" event={"ID":"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb","Type":"ContainerStarted","Data":"360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d"}
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.339988 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b613a11-75dd-4743-b254-1c46655902a5" containerID="b7218175f6110fb0a0c93555a25ea992f3ebd712424a54952fa9ed9a0d9ff7c6" exitCode=0
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.340054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5" event={"ID":"0b613a11-75dd-4743-b254-1c46655902a5","Type":"ContainerDied","Data":"b7218175f6110fb0a0c93555a25ea992f3ebd712424a54952fa9ed9a0d9ff7c6"}
Feb 19 00:20:37 crc kubenswrapper[5108]: I0219 00:20:37.340118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5" event={"ID":"0b613a11-75dd-4743-b254-1c46655902a5","Type":"ContainerStarted","Data":"de435af921a031c58fc3ab7ac3706318b313bd8921990a3d6f1593e099f36fa5"}
Feb 19 00:20:38 crc kubenswrapper[5108]: I0219 00:20:38.348923 5108 generic.go:358] "Generic (PLEG): container finished" podID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerID="713a7237fc51f907e233cf0466f39f2f736eccf3d624e4f59f426e20b53f3c47" exitCode=0
Feb 19 00:20:38 crc kubenswrapper[5108]: I0219 00:20:38.349029 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg" event={"ID":"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb","Type":"ContainerDied","Data":"713a7237fc51f907e233cf0466f39f2f736eccf3d624e4f59f426e20b53f3c47"}
Feb 19 00:20:39 crc kubenswrapper[5108]: I0219 00:20:39.359150 5108 generic.go:358] "Generic (PLEG): container finished" podID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerID="023f2655fa59885becb25d805f161f7aa29ba906d46471ffe06e5f74b609b070" exitCode=0
Feb 19 00:20:39 crc kubenswrapper[5108]: I0219 00:20:39.359294 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg" event={"ID":"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb","Type":"ContainerDied","Data":"023f2655fa59885becb25d805f161f7aa29ba906d46471ffe06e5f74b609b070"}
Feb 19 00:20:39 crc kubenswrapper[5108]: I0219 00:20:39.361663 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b613a11-75dd-4743-b254-1c46655902a5" containerID="5f391db25e341db14176a27a25e0048ebbb4601952e3c2aeba16cc233c62b701" exitCode=0
Feb 19 00:20:39 crc kubenswrapper[5108]: I0219 00:20:39.361796 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5" event={"ID":"0b613a11-75dd-4743-b254-1c46655902a5","Type":"ContainerDied","Data":"5f391db25e341db14176a27a25e0048ebbb4601952e3c2aeba16cc233c62b701"}
Feb 19 00:20:40 crc kubenswrapper[5108]: I0219 00:20:40.369082 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b613a11-75dd-4743-b254-1c46655902a5" containerID="05009a5bda7def8cea68f96272c9201ecf8ea017982727479e34e778075ef4e3" exitCode=0
Feb 19 00:20:40 crc kubenswrapper[5108]: I0219 00:20:40.369158 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5" event={"ID":"0b613a11-75dd-4743-b254-1c46655902a5","Type":"ContainerDied","Data":"05009a5bda7def8cea68f96272c9201ecf8ea017982727479e34e778075ef4e3"}
Feb 19 00:20:40 crc kubenswrapper[5108]: I0219 00:20:40.372058 5108 generic.go:358] "Generic (PLEG): container finished" podID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerID="fe233f48e2b0889bc7dfcf98548d498315895be2901149d4957b7c1b7e91427c" exitCode=0
Feb 19 00:20:40 crc kubenswrapper[5108]: I0219 00:20:40.372145 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg" event={"ID":"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb","Type":"ContainerDied","Data":"fe233f48e2b0889bc7dfcf98548d498315895be2901149d4957b7c1b7e91427c"}
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.685408 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.713332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4hn9\" (UniqueName: \"kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9\") pod \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.713392 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util\") pod \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.713543 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle\") pod \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\" (UID: \"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.715877 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle" (OuterVolumeSpecName: "bundle") pod "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" (UID: "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.743849 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util" (OuterVolumeSpecName: "util") pod "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" (UID: "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.747099 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9" (OuterVolumeSpecName: "kube-api-access-j4hn9") pod "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" (UID: "11b0ad91-9b7a-4520-8abb-9ca84c22c5cb"). InnerVolumeSpecName "kube-api-access-j4hn9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.805488 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.814536 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j4hn9\" (UniqueName: \"kubernetes.io/projected/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-kube-api-access-j4hn9\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.814582 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.814592 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b0ad91-9b7a-4520-8abb-9ca84c22c5cb-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.915366 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle\") pod \"0b613a11-75dd-4743-b254-1c46655902a5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.915690 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk\") pod \"0b613a11-75dd-4743-b254-1c46655902a5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.915772 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util\") pod \"0b613a11-75dd-4743-b254-1c46655902a5\" (UID: \"0b613a11-75dd-4743-b254-1c46655902a5\") "
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.915842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle" (OuterVolumeSpecName: "bundle") pod "0b613a11-75dd-4743-b254-1c46655902a5" (UID: "0b613a11-75dd-4743-b254-1c46655902a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.916129 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.918842 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk" (OuterVolumeSpecName: "kube-api-access-wrjgk") pod "0b613a11-75dd-4743-b254-1c46655902a5" (UID: "0b613a11-75dd-4743-b254-1c46655902a5"). InnerVolumeSpecName "kube-api-access-wrjgk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:20:41 crc kubenswrapper[5108]: I0219 00:20:41.933026 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util" (OuterVolumeSpecName: "util") pod "0b613a11-75dd-4743-b254-1c46655902a5" (UID: "0b613a11-75dd-4743-b254-1c46655902a5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.017162 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrjgk\" (UniqueName: \"kubernetes.io/projected/0b613a11-75dd-4743-b254-1c46655902a5-kube-api-access-wrjgk\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.017207 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b613a11-75dd-4743-b254-1c46655902a5-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.383831 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg"
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.383844 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg" event={"ID":"11b0ad91-9b7a-4520-8abb-9ca84c22c5cb","Type":"ContainerDied","Data":"360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d"}
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.383883 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360bc233d1560fd9ddd43e576d6c1c76096051adf1b3835a43969d89014aa28d"
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.386306 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5" event={"ID":"0b613a11-75dd-4743-b254-1c46655902a5","Type":"ContainerDied","Data":"de435af921a031c58fc3ab7ac3706318b313bd8921990a3d6f1593e099f36fa5"}
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.386338 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de435af921a031c58fc3ab7ac3706318b313bd8921990a3d6f1593e099f36fa5"
Feb 19 00:20:42 crc kubenswrapper[5108]: I0219 00:20:42.386451 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385126 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x"]
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385667 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="util"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385683 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="util"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385701 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="pull"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385707 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="pull"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385717 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="extract"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385724 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="extract"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385740 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="pull"
Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385747 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="pull"
Feb 19
00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385763 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="util" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385771 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="util" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385779 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="extract" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385787 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="extract" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385891 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b613a11-75dd-4743-b254-1c46655902a5" containerName="extract" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.385901 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="11b0ad91-9b7a-4520-8abb-9ca84c22c5cb" containerName="extract" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.394399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.398744 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.430542 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x"] Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.442216 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.442329 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lw7\" (UniqueName: \"kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.442366 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc 
kubenswrapper[5108]: I0219 00:20:44.543703 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lw7\" (UniqueName: \"kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.543749 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.543961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.544260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.544501 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.565857 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lw7\" (UniqueName: \"kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.713764 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:44 crc kubenswrapper[5108]: I0219 00:20:44.963183 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x"] Feb 19 00:20:45 crc kubenswrapper[5108]: I0219 00:20:45.406742 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerID="23c95123ee1a7281c780c0aa57fe948a9de7ac395ef17885ba922a18fcadd56c" exitCode=0 Feb 19 00:20:45 crc kubenswrapper[5108]: I0219 00:20:45.406800 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" event={"ID":"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9","Type":"ContainerDied","Data":"23c95123ee1a7281c780c0aa57fe948a9de7ac395ef17885ba922a18fcadd56c"} Feb 19 00:20:45 crc kubenswrapper[5108]: I0219 00:20:45.407065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" event={"ID":"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9","Type":"ContainerStarted","Data":"4172892e8d9c8e12abbbe3d353fc8be2bb48fd668525a3fbf7260f4f4472ba25"} Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.618202 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.666484 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.666620 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.668590 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-dsj2l\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.669553 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.669611 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.729484 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.733627 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.736208 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.736321 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-fbzt8\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.753649 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.758359 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.759415 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.775295 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.802515 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.802570 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmrz\" (UniqueName: \"kubernetes.io/projected/387cf543-9cc1-4861-b4ce-68abdc01d808-kube-api-access-8xmrz\") pod \"obo-prometheus-operator-9bc85b4bf-hx8sv\" (UID: \"387cf543-9cc1-4861-b4ce-68abdc01d808\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.802603 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.903607 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.903681 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmrz\" (UniqueName: \"kubernetes.io/projected/387cf543-9cc1-4861-b4ce-68abdc01d808-kube-api-access-8xmrz\") pod \"obo-prometheus-operator-9bc85b4bf-hx8sv\" (UID: \"387cf543-9cc1-4861-b4ce-68abdc01d808\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.903712 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-apiservice-cert\") 
pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.903793 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.903823 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.915069 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.924541 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxnlv"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.926354 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/266935b2-7e3e-4471-ab13-97b596e98f12-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh\" (UID: \"266935b2-7e3e-4471-ab13-97b596e98f12\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.930192 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.932258 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmrz\" (UniqueName: \"kubernetes.io/projected/387cf543-9cc1-4861-b4ce-68abdc01d808-kube-api-access-8xmrz\") pod \"obo-prometheus-operator-9bc85b4bf-hx8sv\" (UID: \"387cf543-9cc1-4861-b4ce-68abdc01d808\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.932640 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-pn5r8\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.933513 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.941495 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxnlv"] Feb 19 00:20:48 crc kubenswrapper[5108]: I0219 00:20:48.981854 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.004560 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jgw\" (UniqueName: \"kubernetes.io/projected/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-kube-api-access-c7jgw\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.004845 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.005054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.005158 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.026692 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.027386 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0315a649-f003-4488-a10e-025063b858af-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr\" (UID: \"0315a649-f003-4488-a10e-025063b858af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.030579 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bft9p"] Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.048380 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.059471 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bft9p"] Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.059645 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.065275 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-bl4v4\"" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.075531 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.106840 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jgw\" (UniqueName: \"kubernetes.io/projected/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-kube-api-access-c7jgw\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.106882 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2xc\" (UniqueName: \"kubernetes.io/projected/41c947a0-c927-4923-a233-a42d1a8b1039-kube-api-access-nj2xc\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.106965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.106987 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/41c947a0-c927-4923-a233-a42d1a8b1039-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.114627 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.126651 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jgw\" (UniqueName: \"kubernetes.io/projected/0ab73ba4-63c1-423b-9bc7-ecdec5a770b1-kube-api-access-c7jgw\") pod \"observability-operator-85c68dddb-dxnlv\" (UID: \"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1\") " pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.215188 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/41c947a0-c927-4923-a233-a42d1a8b1039-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.215322 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2xc\" (UniqueName: \"kubernetes.io/projected/41c947a0-c927-4923-a233-a42d1a8b1039-kube-api-access-nj2xc\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.216875 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/41c947a0-c927-4923-a233-a42d1a8b1039-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc 
kubenswrapper[5108]: I0219 00:20:49.245657 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2xc\" (UniqueName: \"kubernetes.io/projected/41c947a0-c927-4923-a233-a42d1a8b1039-kube-api-access-nj2xc\") pod \"perses-operator-669c9f96b5-bft9p\" (UID: \"41c947a0-c927-4923-a233-a42d1a8b1039\") " pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.283716 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.387318 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.493095 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr"] Feb 19 00:20:49 crc kubenswrapper[5108]: W0219 00:20:49.536463 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod266935b2_7e3e_4471_ab13_97b596e98f12.slice/crio-ea3f0bb19085896b085131c1b99d3f985df2831fe7901622944d7954276ad4a8 WatchSource:0}: Error finding container ea3f0bb19085896b085131c1b99d3f985df2831fe7901622944d7954276ad4a8: Status 404 returned error can't find the container with id ea3f0bb19085896b085131c1b99d3f985df2831fe7901622944d7954276ad4a8 Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.559496 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh"] Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.614424 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv"] Feb 19 00:20:49 crc kubenswrapper[5108]: W0219 00:20:49.673435 5108 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387cf543_9cc1_4861_b4ce_68abdc01d808.slice/crio-5a33b0d6e55c91865e15dbf0ea02a728d684d68fc1bc92969aa00e8db73837cb WatchSource:0}: Error finding container 5a33b0d6e55c91865e15dbf0ea02a728d684d68fc1bc92969aa00e8db73837cb: Status 404 returned error can't find the container with id 5a33b0d6e55c91865e15dbf0ea02a728d684d68fc1bc92969aa00e8db73837cb Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.856194 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxnlv"] Feb 19 00:20:49 crc kubenswrapper[5108]: W0219 00:20:49.861686 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab73ba4_63c1_423b_9bc7_ecdec5a770b1.slice/crio-b746c17fac1cabbf038ce003c6aa3432473728271935ef497c107808d613a311 WatchSource:0}: Error finding container b746c17fac1cabbf038ce003c6aa3432473728271935ef497c107808d613a311: Status 404 returned error can't find the container with id b746c17fac1cabbf038ce003c6aa3432473728271935ef497c107808d613a311 Feb 19 00:20:49 crc kubenswrapper[5108]: I0219 00:20:49.945876 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bft9p"] Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.071572 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-57f4f7d6d4-b2nct"] Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.077040 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.082328 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.082418 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.082633 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.082675 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-z7sxf\"" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.086246 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57f4f7d6d4-b2nct"] Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.230124 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-apiservice-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.230180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-webhook-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.230252 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4mt\" (UniqueName: \"kubernetes.io/projected/c75260fd-6720-448e-8926-82f29d2eec16-kube-api-access-lm4mt\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.331848 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4mt\" (UniqueName: \"kubernetes.io/projected/c75260fd-6720-448e-8926-82f29d2eec16-kube-api-access-lm4mt\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.331962 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-apiservice-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.331990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-webhook-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.338300 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-apiservice-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc 
kubenswrapper[5108]: I0219 00:20:50.343896 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c75260fd-6720-448e-8926-82f29d2eec16-webhook-cert\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.362959 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4mt\" (UniqueName: \"kubernetes.io/projected/c75260fd-6720-448e-8926-82f29d2eec16-kube-api-access-lm4mt\") pod \"elastic-operator-57f4f7d6d4-b2nct\" (UID: \"c75260fd-6720-448e-8926-82f29d2eec16\") " pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.396128 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.481496 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" event={"ID":"266935b2-7e3e-4471-ab13-97b596e98f12","Type":"ContainerStarted","Data":"ea3f0bb19085896b085131c1b99d3f985df2831fe7901622944d7954276ad4a8"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.483115 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" event={"ID":"41c947a0-c927-4923-a233-a42d1a8b1039","Type":"ContainerStarted","Data":"3c7cebffd1f1a5f84231c3c15ae3c40a7103c949dfd698791ff01404e99079b8"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.487760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" 
event={"ID":"387cf543-9cc1-4861-b4ce-68abdc01d808","Type":"ContainerStarted","Data":"5a33b0d6e55c91865e15dbf0ea02a728d684d68fc1bc92969aa00e8db73837cb"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.496555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" event={"ID":"0315a649-f003-4488-a10e-025063b858af","Type":"ContainerStarted","Data":"a3ca1323b9295be9cf6dbc9b9f183dcd046457288d2d5166b13f5a0e29259962"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.500364 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerID="0625db10949ff73e8386c423d823a26cc62797b1fc30f81cdd814daf4cbbc2ab" exitCode=0 Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.500475 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" event={"ID":"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9","Type":"ContainerDied","Data":"0625db10949ff73e8386c423d823a26cc62797b1fc30f81cdd814daf4cbbc2ab"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.502434 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" event={"ID":"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1","Type":"ContainerStarted","Data":"b746c17fac1cabbf038ce003c6aa3432473728271935ef497c107808d613a311"} Feb 19 00:20:50 crc kubenswrapper[5108]: I0219 00:20:50.673121 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57f4f7d6d4-b2nct"] Feb 19 00:20:50 crc kubenswrapper[5108]: W0219 00:20:50.690373 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc75260fd_6720_448e_8926_82f29d2eec16.slice/crio-822a22a75ad6a92b2a91126527ca811e513ade5e16d39b117dc76146f9d3996f WatchSource:0}: Error finding container 
822a22a75ad6a92b2a91126527ca811e513ade5e16d39b117dc76146f9d3996f: Status 404 returned error can't find the container with id 822a22a75ad6a92b2a91126527ca811e513ade5e16d39b117dc76146f9d3996f Feb 19 00:20:51 crc kubenswrapper[5108]: I0219 00:20:51.521412 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" event={"ID":"c75260fd-6720-448e-8926-82f29d2eec16","Type":"ContainerStarted","Data":"822a22a75ad6a92b2a91126527ca811e513ade5e16d39b117dc76146f9d3996f"} Feb 19 00:20:51 crc kubenswrapper[5108]: I0219 00:20:51.527074 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerID="2177f61e3b35a501d456397808ac9c799c17f680fe91abbf9c2fa628c5640fca" exitCode=0 Feb 19 00:20:51 crc kubenswrapper[5108]: I0219 00:20:51.527157 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" event={"ID":"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9","Type":"ContainerDied","Data":"2177f61e3b35a501d456397808ac9c799c17f680fe91abbf9c2fa628c5640fca"} Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.710649 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bjbht"] Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.718439 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bjbht"] Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.718574 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.723284 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-2fxfs\"" Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.864359 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjv6\" (UniqueName: \"kubernetes.io/projected/23a23f44-8f3a-484c-93ad-443f99c02474-kube-api-access-hqjv6\") pod \"interconnect-operator-78b9bd8798-bjbht\" (UID: \"23a23f44-8f3a-484c-93ad-443f99c02474\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.945480 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.965795 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjv6\" (UniqueName: \"kubernetes.io/projected/23a23f44-8f3a-484c-93ad-443f99c02474-kube-api-access-hqjv6\") pod \"interconnect-operator-78b9bd8798-bjbht\" (UID: \"23a23f44-8f3a-484c-93ad-443f99c02474\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" Feb 19 00:20:52 crc kubenswrapper[5108]: I0219 00:20:52.995524 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjv6\" (UniqueName: \"kubernetes.io/projected/23a23f44-8f3a-484c-93ad-443f99c02474-kube-api-access-hqjv6\") pod \"interconnect-operator-78b9bd8798-bjbht\" (UID: \"23a23f44-8f3a-484c-93ad-443f99c02474\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.045050 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.066627 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util\") pod \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.075149 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle\") pod \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.078503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2lw7\" (UniqueName: \"kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7\") pod \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\" (UID: \"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9\") " Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.075085 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util" (OuterVolumeSpecName: "util") pod "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" (UID: "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.077328 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle" (OuterVolumeSpecName: "bundle") pod "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" (UID: "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.088079 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7" (OuterVolumeSpecName: "kube-api-access-z2lw7") pod "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" (UID: "e8d4c5ea-879f-4722-bc3f-d57e6fc208e9"). InnerVolumeSpecName "kube-api-access-z2lw7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.180675 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.180712 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.180721 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2lw7\" (UniqueName: \"kubernetes.io/projected/e8d4c5ea-879f-4722-bc3f-d57e6fc208e9-kube-api-access-z2lw7\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.554406 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" event={"ID":"e8d4c5ea-879f-4722-bc3f-d57e6fc208e9","Type":"ContainerDied","Data":"4172892e8d9c8e12abbbe3d353fc8be2bb48fd668525a3fbf7260f4f4472ba25"} Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.554441 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4172892e8d9c8e12abbbe3d353fc8be2bb48fd668525a3fbf7260f4f4472ba25" Feb 19 00:20:53 crc kubenswrapper[5108]: I0219 00:20:53.554526 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x" Feb 19 00:20:54 crc kubenswrapper[5108]: I0219 00:20:54.418584 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bjbht"] Feb 19 00:20:56 crc kubenswrapper[5108]: I0219 00:20:56.575379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" event={"ID":"23a23f44-8f3a-484c-93ad-443f99c02474","Type":"ContainerStarted","Data":"38c157e94230773ebae21f01838fd74c464146761fe4aaeb14fb3b5f2d6acc9c"} Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.245317 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9"] Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246349 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="util" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246363 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="util" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246375 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="extract" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246381 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="extract" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246393 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="pull" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246399 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" 
containerName="pull" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.246497 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8d4c5ea-879f-4722-bc3f-d57e6fc208e9" containerName="extract" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.348243 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9"] Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.348396 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.350639 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.351504 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-7s624\"" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.351851 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.393820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.393879 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kc5\" (UniqueName: 
\"kubernetes.io/projected/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-kube-api-access-j8kc5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.497352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.497444 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kc5\" (UniqueName: \"kubernetes.io/projected/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-kube-api-access-j8kc5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.497972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.543043 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kc5\" (UniqueName: \"kubernetes.io/projected/d27162bf-7bb7-4d12-acbd-237b2c10d5cf-kube-api-access-j8kc5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-c6nq9\" (UID: \"d27162bf-7bb7-4d12-acbd-237b2c10d5cf\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:08 crc kubenswrapper[5108]: I0219 00:21:08.667315 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.477108 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9"] Feb 19 00:21:10 crc kubenswrapper[5108]: W0219 00:21:10.486439 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27162bf_7bb7_4d12_acbd_237b2c10d5cf.slice/crio-8d993d08fc463c465a840add654a83854623d6c5fae15a5dbf26bb65f47c7a65 WatchSource:0}: Error finding container 8d993d08fc463c465a840add654a83854623d6c5fae15a5dbf26bb65f47c7a65: Status 404 returned error can't find the container with id 8d993d08fc463c465a840add654a83854623d6c5fae15a5dbf26bb65f47c7a65 Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.673916 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" event={"ID":"0315a649-f003-4488-a10e-025063b858af","Type":"ContainerStarted","Data":"73ce162ac265c2a50389b1517840bc55f689de0e06e30fe0aa25a009749f9048"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.675997 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" event={"ID":"23a23f44-8f3a-484c-93ad-443f99c02474","Type":"ContainerStarted","Data":"3583baac6869548c0ec306a9074f77aa6a1e3d347ffed93b3682b8359527fc18"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.680624 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" 
event={"ID":"c75260fd-6720-448e-8926-82f29d2eec16","Type":"ContainerStarted","Data":"9dffc6a0dab0a10f9b880ca0a51e952d36163511762b971e517e025fe6b46fef"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.683428 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" event={"ID":"0ab73ba4-63c1-423b-9bc7-ecdec5a770b1","Type":"ContainerStarted","Data":"fc808c618b4f0a618a050323c4e0421a06366bee1c185f529250ba64a49d41eb"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.684289 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.687004 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-dxnlv" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.688519 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" event={"ID":"266935b2-7e3e-4471-ab13-97b596e98f12","Type":"ContainerStarted","Data":"7af54626fddeabfd1b4da9be07b900e1f97e5d93ea3b961a35e83e190ffb5623"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.691238 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" event={"ID":"41c947a0-c927-4923-a233-a42d1a8b1039","Type":"ContainerStarted","Data":"ecea47dd0f6764a3c5b69f26226efdda615650954c70f487e074d3b357d09e4e"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.691438 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.693094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" 
event={"ID":"387cf543-9cc1-4861-b4ce-68abdc01d808","Type":"ContainerStarted","Data":"ca143ddbae341e7153bfb2967ec81d62b2aab940392d9a748460139aa991aa03"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.695605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" event={"ID":"d27162bf-7bb7-4d12-acbd-237b2c10d5cf","Type":"ContainerStarted","Data":"8d993d08fc463c465a840add654a83854623d6c5fae15a5dbf26bb65f47c7a65"} Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.709952 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr" podStartSLOduration=2.838925255 podStartE2EDuration="22.709909418s" podCreationTimestamp="2026-02-19 00:20:48 +0000 UTC" firstStartedPulling="2026-02-19 00:20:49.528229642 +0000 UTC m=+708.494875940" lastFinishedPulling="2026-02-19 00:21:09.399213785 +0000 UTC m=+728.365860103" observedRunningTime="2026-02-19 00:21:10.707233924 +0000 UTC m=+729.673880232" watchObservedRunningTime="2026-02-19 00:21:10.709909418 +0000 UTC m=+729.676555726" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.760117 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-bft9p" podStartSLOduration=2.333195907 podStartE2EDuration="21.760082221s" podCreationTimestamp="2026-02-19 00:20:49 +0000 UTC" firstStartedPulling="2026-02-19 00:20:49.972319861 +0000 UTC m=+708.938966169" lastFinishedPulling="2026-02-19 00:21:09.399206155 +0000 UTC m=+728.365852483" observedRunningTime="2026-02-19 00:21:10.751173245 +0000 UTC m=+729.717819563" watchObservedRunningTime="2026-02-19 00:21:10.760082221 +0000 UTC m=+729.726728549" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.832786 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-85c68dddb-dxnlv" podStartSLOduration=2.743684579 podStartE2EDuration="22.832764294s" podCreationTimestamp="2026-02-19 00:20:48 +0000 UTC" firstStartedPulling="2026-02-19 00:20:49.864048216 +0000 UTC m=+708.830694514" lastFinishedPulling="2026-02-19 00:21:09.953127921 +0000 UTC m=+728.919774229" observedRunningTime="2026-02-19 00:21:10.804677689 +0000 UTC m=+729.771323997" watchObservedRunningTime="2026-02-19 00:21:10.832764294 +0000 UTC m=+729.799410602" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.833025 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hx8sv" podStartSLOduration=2.561213131 podStartE2EDuration="22.833020881s" podCreationTimestamp="2026-02-19 00:20:48 +0000 UTC" firstStartedPulling="2026-02-19 00:20:49.679430309 +0000 UTC m=+708.646076617" lastFinishedPulling="2026-02-19 00:21:09.951238059 +0000 UTC m=+728.917884367" observedRunningTime="2026-02-19 00:21:10.827969042 +0000 UTC m=+729.794615350" watchObservedRunningTime="2026-02-19 00:21:10.833020881 +0000 UTC m=+729.799667189" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.847469 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-bjbht" podStartSLOduration=4.831848063 podStartE2EDuration="18.847453669s" podCreationTimestamp="2026-02-19 00:20:52 +0000 UTC" firstStartedPulling="2026-02-19 00:20:56.156613863 +0000 UTC m=+715.123260171" lastFinishedPulling="2026-02-19 00:21:10.172219469 +0000 UTC m=+729.138865777" observedRunningTime="2026-02-19 00:21:10.847217292 +0000 UTC m=+729.813863600" watchObservedRunningTime="2026-02-19 00:21:10.847453669 +0000 UTC m=+729.814099977" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.869759 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-57f4f7d6d4-b2nct" 
podStartSLOduration=2.174525305 podStartE2EDuration="20.869740713s" podCreationTimestamp="2026-02-19 00:20:50 +0000 UTC" firstStartedPulling="2026-02-19 00:20:50.698637009 +0000 UTC m=+709.665283317" lastFinishedPulling="2026-02-19 00:21:09.393852407 +0000 UTC m=+728.360498725" observedRunningTime="2026-02-19 00:21:10.867139861 +0000 UTC m=+729.833786169" watchObservedRunningTime="2026-02-19 00:21:10.869740713 +0000 UTC m=+729.836387021" Feb 19 00:21:10 crc kubenswrapper[5108]: I0219 00:21:10.903700 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh" podStartSLOduration=2.495344096 podStartE2EDuration="22.903661528s" podCreationTimestamp="2026-02-19 00:20:48 +0000 UTC" firstStartedPulling="2026-02-19 00:20:49.542530996 +0000 UTC m=+708.509177304" lastFinishedPulling="2026-02-19 00:21:09.950848428 +0000 UTC m=+728.917494736" observedRunningTime="2026-02-19 00:21:10.902799665 +0000 UTC m=+729.869445973" watchObservedRunningTime="2026-02-19 00:21:10.903661528 +0000 UTC m=+729.870307826" Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.561152 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.566019 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.587006 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.587474 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.587617 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.588233 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.589540 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-rjpsp\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.590472 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.591464 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.593112 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.594829 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634764 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/651a531d-5946-47ac-95dc-3ad3f9f3b459-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634904 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634923 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634955 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634973 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.634992 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635019 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635038 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635073 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635123 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.635139 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.652168 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736338 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736390 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736412 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736429 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736449 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736473 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736491 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736516 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736539 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736573 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736603 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736623 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/651a531d-5946-47ac-95dc-3ad3f9f3b459-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736640 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.736656 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.742505 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.742760 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.742993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.743665 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.745034 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.750582 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/651a531d-5946-47ac-95dc-3ad3f9f3b459-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.750634 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.751335 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/651a531d-5946-47ac-95dc-3ad3f9f3b459-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.756925 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.765899 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.768124 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.772903 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.773095 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.773304 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.773504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/651a531d-5946-47ac-95dc-3ad3f9f3b459-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"651a531d-5946-47ac-95dc-3ad3f9f3b459\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:11 crc kubenswrapper[5108]: I0219 00:21:11.883351 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:12 crc kubenswrapper[5108]: I0219 00:21:12.228446 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:21:12 crc kubenswrapper[5108]: I0219 00:21:12.714089 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"651a531d-5946-47ac-95dc-3ad3f9f3b459","Type":"ContainerStarted","Data":"5c99694a8e424b71ff0ab1bb89ad0680b2c3de93f1e338bc25d3c6f60776ea2d"}
Feb 19 00:21:18 crc kubenswrapper[5108]: I0219 00:21:18.759882 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" event={"ID":"d27162bf-7bb7-4d12-acbd-237b2c10d5cf","Type":"ContainerStarted","Data":"be38ec68b5dc696698a232e148b2db4b098895231b1ab0f8d851a1bc33ece5da"}
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.705152 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-c6nq9" podStartSLOduration=6.529747474 podStartE2EDuration="13.705132051s" podCreationTimestamp="2026-02-19 00:21:08 +0000 UTC" firstStartedPulling="2026-02-19 00:21:10.49046894 +0000 UTC m=+729.457115248" lastFinishedPulling="2026-02-19 00:21:17.665853507 +0000 UTC m=+736.632499825" observedRunningTime="2026-02-19 00:21:18.786392829 +0000 UTC m=+737.753039157" watchObservedRunningTime="2026-02-19 00:21:21.705132051 +0000 UTC m=+740.671778359"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.708486 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-k2bvb"]
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.726364 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-k2bvb"]
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.726449 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-bft9p"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.726552 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.728800 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.729105 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.731695 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-fh7ks\""
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.894891 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.894985 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ltl\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-kube-api-access-f2ltl\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.996155 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:21 crc kubenswrapper[5108]: I0219 00:21:21.996227 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2ltl\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-kube-api-access-f2ltl\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:22 crc kubenswrapper[5108]: I0219 00:21:22.024383 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:22 crc kubenswrapper[5108]: I0219 00:21:22.036798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2ltl\" (UniqueName: \"kubernetes.io/projected/bdf69b0b-3608-4252-9290-a0e77f5c73ca-kube-api-access-f2ltl\") pod \"cert-manager-webhook-597b96b99b-k2bvb\" (UID: \"bdf69b0b-3608-4252-9290-a0e77f5c73ca\") " pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:22 crc kubenswrapper[5108]: I0219 00:21:22.064168 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:27 crc kubenswrapper[5108]: I0219 00:21:27.606736 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-lccmr"]
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.200030 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.201966 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-w8str\""
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.204926 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-lccmr"]
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.274779 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.274901 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lrt\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-kube-api-access-l9lrt\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.376065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.376137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9lrt\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-kube-api-access-l9lrt\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.399029 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lrt\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-kube-api-access-l9lrt\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.400544 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1810224b-992d-40ff-a9ed-d20d16b843e4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-lccmr\" (UID: \"1810224b-992d-40ff-a9ed-d20d16b843e4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:28 crc kubenswrapper[5108]: I0219 00:21:28.523213 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr"
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.525719 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-lccmr"]
Feb 19 00:21:32 crc kubenswrapper[5108]: W0219 00:21:32.541354 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1810224b_992d_40ff_a9ed_d20d16b843e4.slice/crio-cbb43eec51d0e7f7d35dcfc4cfd1a0d37a2f4a58b109ab6679c88a1acba16cc4 WatchSource:0}: Error finding container cbb43eec51d0e7f7d35dcfc4cfd1a0d37a2f4a58b109ab6679c88a1acba16cc4: Status 404 returned error can't find the container with id cbb43eec51d0e7f7d35dcfc4cfd1a0d37a2f4a58b109ab6679c88a1acba16cc4
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.622977 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-k2bvb"]
Feb 19 00:21:32 crc kubenswrapper[5108]: W0219 00:21:32.626841 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf69b0b_3608_4252_9290_a0e77f5c73ca.slice/crio-67fb03296890b1d3dee1c1d15b394457852f6e095f919bb8bd19ffc024b462eb WatchSource:0}: Error finding container 67fb03296890b1d3dee1c1d15b394457852f6e095f919bb8bd19ffc024b462eb: Status 404 returned error can't find the container with id 67fb03296890b1d3dee1c1d15b394457852f6e095f919bb8bd19ffc024b462eb
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.850544 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb" event={"ID":"bdf69b0b-3608-4252-9290-a0e77f5c73ca","Type":"ContainerStarted","Data":"67fb03296890b1d3dee1c1d15b394457852f6e095f919bb8bd19ffc024b462eb"}
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.851853 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr" event={"ID":"1810224b-992d-40ff-a9ed-d20d16b843e4","Type":"ContainerStarted","Data":"cbb43eec51d0e7f7d35dcfc4cfd1a0d37a2f4a58b109ab6679c88a1acba16cc4"}
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.853674 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"651a531d-5946-47ac-95dc-3ad3f9f3b459","Type":"ContainerStarted","Data":"1060225b44c9a4f37fd03305c221171390db88aa44bed25f4fc8131db31d6df8"}
Feb 19 00:21:32 crc kubenswrapper[5108]: I0219 00:21:32.989701 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:21:33 crc kubenswrapper[5108]: I0219 00:21:33.037917 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:21:34 crc kubenswrapper[5108]: I0219 00:21:34.869482 5108 generic.go:358] "Generic (PLEG): container finished" podID="651a531d-5946-47ac-95dc-3ad3f9f3b459" containerID="1060225b44c9a4f37fd03305c221171390db88aa44bed25f4fc8131db31d6df8" exitCode=0
Feb 19 00:21:34 crc kubenswrapper[5108]: I0219 00:21:34.869593 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"651a531d-5946-47ac-95dc-3ad3f9f3b459","Type":"ContainerDied","Data":"1060225b44c9a4f37fd03305c221171390db88aa44bed25f4fc8131db31d6df8"}
Feb 19 00:21:35 crc kubenswrapper[5108]: I0219 00:21:35.878742 5108 generic.go:358] "Generic (PLEG): container finished" podID="651a531d-5946-47ac-95dc-3ad3f9f3b459" containerID="f762a09a0fe5caf1572ea196f28976356d744ff61fb0a23b6400284cb1d1c968" exitCode=0
Feb 19 00:21:35 crc kubenswrapper[5108]: I0219 00:21:35.879367 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"651a531d-5946-47ac-95dc-3ad3f9f3b459","Type":"ContainerDied","Data":"f762a09a0fe5caf1572ea196f28976356d744ff61fb0a23b6400284cb1d1c968"}
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.886378 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"651a531d-5946-47ac-95dc-3ad3f9f3b459","Type":"ContainerStarted","Data":"c222fb7cfee4ad5008d3b9d82988a93f502781a86a2bc4e094092232522faad8"}
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.886728 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.888797 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb" event={"ID":"bdf69b0b-3608-4252-9290-a0e77f5c73ca","Type":"ContainerStarted","Data":"8ffc22fdf8053d5ef6815ac3536041aded7686778ebf62e3dcaca0304209e316"}
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.889227 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb"
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.890171 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr" event={"ID":"1810224b-992d-40ff-a9ed-d20d16b843e4","Type":"ContainerStarted","Data":"2335bb36ff05c40e48e27da157be4fb1fe474ed2dc06a5a5bcb86d561b70c669"}
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.917905 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.678538647 podStartE2EDuration="25.917884921s" podCreationTimestamp="2026-02-19 00:21:11 +0000 UTC" firstStartedPulling="2026-02-19 00:21:12.264222336 +0000 UTC m=+731.230868644" lastFinishedPulling="2026-02-19 00:21:32.50356861 +0000 UTC m=+751.470214918" observedRunningTime="2026-02-19 00:21:36.913838449 +0000 UTC m=+755.880484767" watchObservedRunningTime="2026-02-19 00:21:36.917884921 +0000 UTC m=+755.884531229"
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.933639 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-lccmr" podStartSLOduration=6.094946669 podStartE2EDuration="9.933621754s" podCreationTimestamp="2026-02-19 00:21:27 +0000 UTC" firstStartedPulling="2026-02-19 00:21:32.544032896 +0000 UTC m=+751.510679204" lastFinishedPulling="2026-02-19 00:21:36.382707981 +0000 UTC m=+755.349354289" observedRunningTime="2026-02-19 00:21:36.933020578 +0000 UTC m=+755.899666886" watchObservedRunningTime="2026-02-19 00:21:36.933621754 +0000 UTC m=+755.900268062"
Feb 19 00:21:36 crc kubenswrapper[5108]: I0219 00:21:36.971835 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb" podStartSLOduration=12.197926807 podStartE2EDuration="15.971810776s" podCreationTimestamp="2026-02-19 00:21:21 +0000 UTC" firstStartedPulling="2026-02-19 00:21:32.629041449 +0000 UTC m=+751.595687757" lastFinishedPulling="2026-02-19 00:21:36.402925418 +0000 UTC m=+755.369571726" observedRunningTime="2026-02-19 00:21:36.952727481 +0000 UTC m=+755.919373809" watchObservedRunningTime="2026-02-19 00:21:36.971810776 +0000 UTC m=+755.938457084"
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.044481 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-qsbwt"]
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.053481 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-qsbwt"
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.055394 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-sxl7w\""
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.062199 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-qsbwt"]
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.154205 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrc9f\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-kube-api-access-wrc9f\") pod \"cert-manager-759f64656b-qsbwt\" (UID: \"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt"
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.154478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-bound-sa-token\") pod \"cert-manager-759f64656b-qsbwt\" (UID: \"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt"
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.255839 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-bound-sa-token\") pod \"cert-manager-759f64656b-qsbwt\" (UID: \"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt"
Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.255990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrc9f\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-kube-api-access-wrc9f\") pod \"cert-manager-759f64656b-qsbwt\" (UID:
\"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt" Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.281439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-bound-sa-token\") pod \"cert-manager-759f64656b-qsbwt\" (UID: \"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt" Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.282265 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrc9f\" (UniqueName: \"kubernetes.io/projected/e96a0c11-ab9b-48a6-9a98-94a33b8b828d-kube-api-access-wrc9f\") pod \"cert-manager-759f64656b-qsbwt\" (UID: \"e96a0c11-ab9b-48a6-9a98-94a33b8b828d\") " pod="cert-manager/cert-manager-759f64656b-qsbwt" Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.370060 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-qsbwt" Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.562915 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-qsbwt"] Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.918053 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-qsbwt" event={"ID":"e96a0c11-ab9b-48a6-9a98-94a33b8b828d","Type":"ContainerStarted","Data":"6c6f741faf9324614da83296ff828192a580e9e5b6b3168d38ff6beba9b5b775"} Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.918162 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-qsbwt" event={"ID":"e96a0c11-ab9b-48a6-9a98-94a33b8b828d","Type":"ContainerStarted","Data":"f3f2ad97cb7cbcd2bc61a360be4c6d411b7a061bd32d987ad644c2a799b73c2a"} Feb 19 00:21:40 crc kubenswrapper[5108]: I0219 00:21:40.938920 5108 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-759f64656b-qsbwt" podStartSLOduration=0.938899222 podStartE2EDuration="938.899222ms" podCreationTimestamp="2026-02-19 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:21:40.938384367 +0000 UTC m=+759.905030685" watchObservedRunningTime="2026-02-19 00:21:40.938899222 +0000 UTC m=+759.905545540" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.756087 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.763268 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.766163 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.766178 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.766212 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.766219 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.772235 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894260 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894306 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894349 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894389 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894551 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894686 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwcq\" (UniqueName: \"kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894752 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894789 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894835 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894903 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.894956 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.996745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.996829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc 
kubenswrapper[5108]: I0219 00:21:42.996887 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.996931 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997048 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997061 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-1-build\" (UID: 
\"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997218 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997286 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997405 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.997423 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nxwcq\" (UniqueName: 
\"kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998013 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998121 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998097 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998198 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.998625 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.999004 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.999088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:42 crc kubenswrapper[5108]: I0219 00:21:42.999397 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:43 
crc kubenswrapper[5108]: I0219 00:21:43.014200 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.017137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.018219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxwcq\" (UniqueName: \"kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq\") pod \"service-telemetry-operator-1-build\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.086044 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.606194 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.907371 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-k2bvb" Feb 19 00:21:43 crc kubenswrapper[5108]: I0219 00:21:43.937480 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d5d5c3a7-02fd-474e-8a45-8386fe6cee17","Type":"ContainerStarted","Data":"89f082989e85738a70e6fecc2082473d93430c1e32e44c9b93a4d1b390341b33"} Feb 19 00:21:47 crc kubenswrapper[5108]: I0219 00:21:47.995182 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="651a531d-5946-47ac-95dc-3ad3f9f3b459" containerName="elasticsearch" probeResult="failure" output=< Feb 19 00:21:47 crc kubenswrapper[5108]: {"timestamp": "2026-02-19T00:21:47+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 19 00:21:47 crc kubenswrapper[5108]: > Feb 19 00:21:48 crc kubenswrapper[5108]: I0219 00:21:48.971856 5108 generic.go:358] "Generic (PLEG): container finished" podID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerID="3bb6a7da0fa576f7e545220a5fec51ccf368e7d2f2395e1d3fd2d444cc021aec" exitCode=0 Feb 19 00:21:48 crc kubenswrapper[5108]: I0219 00:21:48.971919 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d5d5c3a7-02fd-474e-8a45-8386fe6cee17","Type":"ContainerDied","Data":"3bb6a7da0fa576f7e545220a5fec51ccf368e7d2f2395e1d3fd2d444cc021aec"} Feb 19 00:21:49 crc kubenswrapper[5108]: I0219 00:21:49.982411 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d5d5c3a7-02fd-474e-8a45-8386fe6cee17","Type":"ContainerStarted","Data":"0adeabfb3806e38c663b937b6ade1f343a1346b03c61a84df6cdf934dbb6a7cc"} Feb 19 00:21:50 crc kubenswrapper[5108]: I0219 00:21:50.030996 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=3.79759884 podStartE2EDuration="8.030931532s" podCreationTimestamp="2026-02-19 00:21:42 +0000 UTC" firstStartedPulling="2026-02-19 00:21:43.613091334 +0000 UTC m=+762.579737642" lastFinishedPulling="2026-02-19 00:21:47.846424026 +0000 UTC m=+766.813070334" observedRunningTime="2026-02-19 00:21:50.023490437 +0000 UTC m=+768.990136745" watchObservedRunningTime="2026-02-19 00:21:50.030931532 +0000 UTC m=+768.997577880" Feb 19 00:21:53 crc kubenswrapper[5108]: I0219 00:21:53.121527 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:21:53 crc kubenswrapper[5108]: I0219 00:21:53.310730 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:53 crc kubenswrapper[5108]: I0219 00:21:53.311012 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="docker-build" containerID="cri-o://0adeabfb3806e38c663b937b6ade1f343a1346b03c61a84df6cdf934dbb6a7cc" gracePeriod=30 Feb 19 00:21:54 crc kubenswrapper[5108]: I0219 00:21:54.883165 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.512488 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 
00:21:55.513100 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.517086 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.517086 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.517185 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583436 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583556 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: 
\"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583679 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjthz\" (UniqueName: \"kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583737 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.583832 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584011 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584094 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584149 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584235 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584320 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.584392 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-2-build\" (UID: 
\"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.686861 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687048 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687079 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687115 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: 
\"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687366 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687428 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687520 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687611 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687669 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687728 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjthz\" (UniqueName: \"kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687779 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.687886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.688652 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.688671 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.690333 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.690828 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.691660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.692155 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: 
\"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.692481 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.692991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.696470 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.696762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.719058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjthz\" (UniqueName: 
\"kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz\") pod \"service-telemetry-operator-2-build\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:55 crc kubenswrapper[5108]: I0219 00:21:55.849358 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.035345 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d5d5c3a7-02fd-474e-8a45-8386fe6cee17/docker-build/0.log" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.036334 5108 generic.go:358] "Generic (PLEG): container finished" podID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerID="0adeabfb3806e38c663b937b6ade1f343a1346b03c61a84df6cdf934dbb6a7cc" exitCode=1 Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.036379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d5d5c3a7-02fd-474e-8a45-8386fe6cee17","Type":"ContainerDied","Data":"0adeabfb3806e38c663b937b6ade1f343a1346b03c61a84df6cdf934dbb6a7cc"} Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.508276 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:21:56 crc kubenswrapper[5108]: W0219 00:21:56.512474 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeba442e3_f184_4038_a258_078e62c2eec6.slice/crio-07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714 WatchSource:0}: Error finding container 07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714: Status 404 returned error can't find the container with id 07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714 Feb 19 00:21:56 crc 
kubenswrapper[5108]: I0219 00:21:56.603332 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d5d5c3a7-02fd-474e-8a45-8386fe6cee17/docker-build/0.log" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.603836 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703430 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703505 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703563 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703659 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703721 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703741 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxwcq\" (UniqueName: \"kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703797 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: 
\"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703844 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.703878 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles\") pod \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\" (UID: \"d5d5c3a7-02fd-474e-8a45-8386fe6cee17\") " Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.704312 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.704909 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.704942 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.705437 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.705545 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.705838 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.705838 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.705844 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.706130 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.711632 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq" (OuterVolumeSpecName: "kube-api-access-nxwcq") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "kube-api-access-nxwcq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.711682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.714073 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "d5d5c3a7-02fd-474e-8a45-8386fe6cee17" (UID: "d5d5c3a7-02fd-474e-8a45-8386fe6cee17"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.805551 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806389 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806407 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806420 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806433 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806445 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806456 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nxwcq\" (UniqueName: \"kubernetes.io/projected/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-kube-api-access-nxwcq\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806467 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806479 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806490 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806501 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:56 crc kubenswrapper[5108]: I0219 00:21:56.806512 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d5c3a7-02fd-474e-8a45-8386fe6cee17-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.042866 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d5d5c3a7-02fd-474e-8a45-8386fe6cee17/docker-build/0.log" Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.043329 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.043381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d5d5c3a7-02fd-474e-8a45-8386fe6cee17","Type":"ContainerDied","Data":"89f082989e85738a70e6fecc2082473d93430c1e32e44c9b93a4d1b390341b33"} Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.043442 5108 scope.go:117] "RemoveContainer" containerID="0adeabfb3806e38c663b937b6ade1f343a1346b03c61a84df6cdf934dbb6a7cc" Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.046634 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerStarted","Data":"488dbeec3920555a87ec3f59e658de44bd48ff12dcd033d2be551b1e44b184c7"} Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.046660 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerStarted","Data":"07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714"} 
Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.070682 5108 scope.go:117] "RemoveContainer" containerID="3bb6a7da0fa576f7e545220a5fec51ccf368e7d2f2395e1d3fd2d444cc021aec" Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.113044 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.120871 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:21:57 crc kubenswrapper[5108]: I0219 00:21:57.855270 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" path="/var/lib/kubelet/pods/d5d5c3a7-02fd-474e-8a45-8386fe6cee17/volumes" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.139990 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524342-6qjdp"] Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.140793 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="manage-dockerfile" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.140813 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="manage-dockerfile" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.140844 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="docker-build" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.140853 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="docker-build" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.141002 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5d5c3a7-02fd-474e-8a45-8386fe6cee17" containerName="docker-build" Feb 19 00:22:00 crc 
kubenswrapper[5108]: I0219 00:22:00.144558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.148535 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.148967 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.150157 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-6qjdp"] Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.150481 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.252439 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmmh\" (UniqueName: \"kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh\") pod \"auto-csr-approver-29524342-6qjdp\" (UID: \"1f240172-d316-44ed-abb7-0ecc623b7967\") " pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.353751 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmmh\" (UniqueName: \"kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh\") pod \"auto-csr-approver-29524342-6qjdp\" (UID: \"1f240172-d316-44ed-abb7-0ecc623b7967\") " pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.382594 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmmh\" (UniqueName: 
\"kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh\") pod \"auto-csr-approver-29524342-6qjdp\" (UID: \"1f240172-d316-44ed-abb7-0ecc623b7967\") " pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.460870 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:00 crc kubenswrapper[5108]: I0219 00:22:00.716659 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-6qjdp"] Feb 19 00:22:01 crc kubenswrapper[5108]: I0219 00:22:01.071046 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" event={"ID":"1f240172-d316-44ed-abb7-0ecc623b7967","Type":"ContainerStarted","Data":"1042b51cd978503dbc09c386c33b3bdf55a900dc4083a551c42e8521dd1b9afb"} Feb 19 00:22:02 crc kubenswrapper[5108]: I0219 00:22:02.080360 5108 generic.go:358] "Generic (PLEG): container finished" podID="1f240172-d316-44ed-abb7-0ecc623b7967" containerID="56988224e9acaf6bbd3324c3ccc10d5ccbeb291f60c19546eea6abfeb1995016" exitCode=0 Feb 19 00:22:02 crc kubenswrapper[5108]: I0219 00:22:02.080462 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" event={"ID":"1f240172-d316-44ed-abb7-0ecc623b7967","Type":"ContainerDied","Data":"56988224e9acaf6bbd3324c3ccc10d5ccbeb291f60c19546eea6abfeb1995016"} Feb 19 00:22:03 crc kubenswrapper[5108]: I0219 00:22:03.328230 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:03 crc kubenswrapper[5108]: I0219 00:22:03.402134 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmmmh\" (UniqueName: \"kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh\") pod \"1f240172-d316-44ed-abb7-0ecc623b7967\" (UID: \"1f240172-d316-44ed-abb7-0ecc623b7967\") " Feb 19 00:22:03 crc kubenswrapper[5108]: I0219 00:22:03.414573 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh" (OuterVolumeSpecName: "kube-api-access-pmmmh") pod "1f240172-d316-44ed-abb7-0ecc623b7967" (UID: "1f240172-d316-44ed-abb7-0ecc623b7967"). InnerVolumeSpecName "kube-api-access-pmmmh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:22:03 crc kubenswrapper[5108]: I0219 00:22:03.504151 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmmmh\" (UniqueName: \"kubernetes.io/projected/1f240172-d316-44ed-abb7-0ecc623b7967-kube-api-access-pmmmh\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:04 crc kubenswrapper[5108]: I0219 00:22:04.104009 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" Feb 19 00:22:04 crc kubenswrapper[5108]: I0219 00:22:04.104546 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-6qjdp" event={"ID":"1f240172-d316-44ed-abb7-0ecc623b7967","Type":"ContainerDied","Data":"1042b51cd978503dbc09c386c33b3bdf55a900dc4083a551c42e8521dd1b9afb"} Feb 19 00:22:04 crc kubenswrapper[5108]: I0219 00:22:04.105596 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1042b51cd978503dbc09c386c33b3bdf55a900dc4083a551c42e8521dd1b9afb" Feb 19 00:22:04 crc kubenswrapper[5108]: I0219 00:22:04.376443 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-74wgt"] Feb 19 00:22:04 crc kubenswrapper[5108]: I0219 00:22:04.390091 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-74wgt"] Feb 19 00:22:04 crc kubenswrapper[5108]: E0219 00:22:04.902358 5108 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.234:48110->38.102.83.234:33243: write tcp 38.102.83.234:48110->38.102.83.234:33243: write: broken pipe Feb 19 00:22:05 crc kubenswrapper[5108]: I0219 00:22:05.111304 5108 generic.go:358] "Generic (PLEG): container finished" podID="eba442e3-f184-4038-a258-078e62c2eec6" containerID="488dbeec3920555a87ec3f59e658de44bd48ff12dcd033d2be551b1e44b184c7" exitCode=0 Feb 19 00:22:05 crc kubenswrapper[5108]: I0219 00:22:05.111354 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerDied","Data":"488dbeec3920555a87ec3f59e658de44bd48ff12dcd033d2be551b1e44b184c7"} Feb 19 00:22:05 crc kubenswrapper[5108]: I0219 00:22:05.868201 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f1665b-5fcf-4742-bb14-9479d30e37bc" 
path="/var/lib/kubelet/pods/e3f1665b-5fcf-4742-bb14-9479d30e37bc/volumes" Feb 19 00:22:06 crc kubenswrapper[5108]: I0219 00:22:06.122855 5108 generic.go:358] "Generic (PLEG): container finished" podID="eba442e3-f184-4038-a258-078e62c2eec6" containerID="d4f408c3d340556ce20bb181d18021365f56e431a1c63355ac003b7c0c129f3e" exitCode=0 Feb 19 00:22:06 crc kubenswrapper[5108]: I0219 00:22:06.122990 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerDied","Data":"d4f408c3d340556ce20bb181d18021365f56e431a1c63355ac003b7c0c129f3e"} Feb 19 00:22:06 crc kubenswrapper[5108]: I0219 00:22:06.145171 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:22:06 crc kubenswrapper[5108]: I0219 00:22:06.145247 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:22:06 crc kubenswrapper[5108]: I0219 00:22:06.165136 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_eba442e3-f184-4038-a258-078e62c2eec6/manage-dockerfile/0.log" Feb 19 00:22:07 crc kubenswrapper[5108]: I0219 00:22:07.137073 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerStarted","Data":"f0ea69144496c2d31d9135f4f5a45ad36d8c10c17074f0dd726dd9b33af959cc"} Feb 19 00:22:07 crc 
kubenswrapper[5108]: I0219 00:22:07.183524 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=13.183487265 podStartE2EDuration="13.183487265s" podCreationTimestamp="2026-02-19 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:22:07.171439287 +0000 UTC m=+786.138085665" watchObservedRunningTime="2026-02-19 00:22:07.183487265 +0000 UTC m=+786.150133633" Feb 19 00:22:09 crc kubenswrapper[5108]: I0219 00:22:09.402255 5108 scope.go:117] "RemoveContainer" containerID="4bcac3ca558642286f4500ba772d580f9025584bb612c5589709a276d4d591f3" Feb 19 00:22:36 crc kubenswrapper[5108]: I0219 00:22:36.144767 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:22:36 crc kubenswrapper[5108]: I0219 00:22:36.145540 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.145611 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.146267 5108 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.146336 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.147236 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.147339 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4" gracePeriod=600 Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.616233 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4" exitCode=0 Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.616310 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4"} Feb 19 00:23:06 crc kubenswrapper[5108]: I0219 00:23:06.616378 5108 scope.go:117] "RemoveContainer" 
containerID="647ea21ed953812ffffbc73a5fd69b26af2cf7eb9e570947d57dd504f152834c" Feb 19 00:23:07 crc kubenswrapper[5108]: I0219 00:23:07.629281 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02"} Feb 19 00:23:32 crc kubenswrapper[5108]: I0219 00:23:32.813015 5108 generic.go:358] "Generic (PLEG): container finished" podID="eba442e3-f184-4038-a258-078e62c2eec6" containerID="f0ea69144496c2d31d9135f4f5a45ad36d8c10c17074f0dd726dd9b33af959cc" exitCode=0 Feb 19 00:23:32 crc kubenswrapper[5108]: I0219 00:23:32.813668 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerDied","Data":"f0ea69144496c2d31d9135f4f5a45ad36d8c10c17074f0dd726dd9b33af959cc"} Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.147903 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227339 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227474 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227526 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227559 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227610 
5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227682 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227784 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227822 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjthz\" (UniqueName: \"kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227843 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227864 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.227880 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs\") pod \"eba442e3-f184-4038-a258-078e62c2eec6\" (UID: \"eba442e3-f184-4038-a258-078e62c2eec6\") " Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.228125 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.228400 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.228416 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eba442e3-f184-4038-a258-078e62c2eec6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.228735 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.228968 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.229125 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.229525 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.241706 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.241690 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.241853 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz" (OuterVolumeSpecName: "kube-api-access-fjthz") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "kube-api-access-fjthz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.262917 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331745 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331791 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331804 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331816 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: 
\"kubernetes.io/secret/eba442e3-f184-4038-a258-078e62c2eec6-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331828 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331840 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331852 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjthz\" (UniqueName: \"kubernetes.io/projected/eba442e3-f184-4038-a258-078e62c2eec6-kube-api-access-fjthz\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.331864 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eba442e3-f184-4038-a258-078e62c2eec6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.406461 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.434153 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.833124 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"eba442e3-f184-4038-a258-078e62c2eec6","Type":"ContainerDied","Data":"07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714"} Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.833477 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07cc3f8e84fd2be28d706f9ce49aad8af9f83ee89fcf9f43a540c3c1ffe92714" Feb 19 00:23:34 crc kubenswrapper[5108]: I0219 00:23:34.833571 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:23:36 crc kubenswrapper[5108]: I0219 00:23:36.794570 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "eba442e3-f184-4038-a258-078e62c2eec6" (UID: "eba442e3-f184-4038-a258-078e62c2eec6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:36 crc kubenswrapper[5108]: I0219 00:23:36.867579 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eba442e3-f184-4038-a258-078e62c2eec6-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.976536 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978107 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="manage-dockerfile" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978218 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="manage-dockerfile" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978300 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f240172-d316-44ed-abb7-0ecc623b7967" containerName="oc" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978367 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f240172-d316-44ed-abb7-0ecc623b7967" containerName="oc" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978435 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="git-clone" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978547 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="git-clone" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978628 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="docker-build" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978689 5108 
state_mem.go:107] "Deleted CPUSet assignment" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="docker-build" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978870 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f240172-d316-44ed-abb7-0ecc623b7967" containerName="oc" Feb 19 00:23:38 crc kubenswrapper[5108]: I0219 00:23:38.978973 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="eba442e3-f184-4038-a258-078e62c2eec6" containerName="docker-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.021805 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.021950 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.028279 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.028340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.028380 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.028345 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100106 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run\") pod 
\"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100177 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100228 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100267 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100439 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100509 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-zkvnn\" (UniqueName: \"kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100553 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100572 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100598 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100622 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100641 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.100670 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.201970 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202031 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202052 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202208 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202238 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202270 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202305 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202334 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvnn\" (UniqueName: \"kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202371 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc 
kubenswrapper[5108]: I0219 00:23:39.202441 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202579 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202658 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202725 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc 
kubenswrapper[5108]: I0219 00:23:39.202831 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.202913 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.203247 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.203369 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.210475 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.210597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.221423 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvnn\" (UniqueName: \"kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn\") pod \"smart-gateway-operator-1-build\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.336960 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.753213 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:39 crc kubenswrapper[5108]: I0219 00:23:39.885605 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9","Type":"ContainerStarted","Data":"edfb4e47befaac17069b9c2e9cf6174566c83d92f6d65de684e1f14176a27870"} Feb 19 00:23:40 crc kubenswrapper[5108]: I0219 00:23:40.894241 5108 generic.go:358] "Generic (PLEG): container finished" podID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerID="3cba5779772f7bd7e5586afad7edbf096a7359f395dde8d1a9452b74d5a84fc4" exitCode=0 Feb 19 00:23:40 crc kubenswrapper[5108]: I0219 00:23:40.894349 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9","Type":"ContainerDied","Data":"3cba5779772f7bd7e5586afad7edbf096a7359f395dde8d1a9452b74d5a84fc4"} Feb 19 00:23:41 crc kubenswrapper[5108]: I0219 00:23:41.903196 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9","Type":"ContainerStarted","Data":"6e9bc81ef822be5d4c37d83d10efc82c5f8abc8ee8fa983d6683050cc92af07d"} Feb 19 00:23:41 crc kubenswrapper[5108]: I0219 00:23:41.937294 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.937261255 podStartE2EDuration="3.937261255s" podCreationTimestamp="2026-02-19 00:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:23:41.932412597 +0000 UTC m=+880.899059005" watchObservedRunningTime="2026-02-19 00:23:41.937261255 +0000 UTC m=+880.903907583" Feb 19 00:23:49 crc kubenswrapper[5108]: I0219 00:23:49.802425 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:49 crc kubenswrapper[5108]: I0219 00:23:49.803476 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="docker-build" containerID="cri-o://6e9bc81ef822be5d4c37d83d10efc82c5f8abc8ee8fa983d6683050cc92af07d" gracePeriod=30 Feb 19 00:23:50 crc kubenswrapper[5108]: I0219 00:23:50.965878 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_60c3cc3c-b2c9-48d8-8e26-25900f93c5b9/docker-build/0.log" Feb 19 00:23:50 crc kubenswrapper[5108]: I0219 00:23:50.966964 5108 generic.go:358] "Generic (PLEG): container 
finished" podID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerID="6e9bc81ef822be5d4c37d83d10efc82c5f8abc8ee8fa983d6683050cc92af07d" exitCode=1 Feb 19 00:23:50 crc kubenswrapper[5108]: I0219 00:23:50.967102 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9","Type":"ContainerDied","Data":"6e9bc81ef822be5d4c37d83d10efc82c5f8abc8ee8fa983d6683050cc92af07d"} Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.371818 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_60c3cc3c-b2c9-48d8-8e26-25900f93c5b9/docker-build/0.log" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.372637 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.438396 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.439047 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="manage-dockerfile" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.439065 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="manage-dockerfile" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.439078 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="docker-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.439084 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="docker-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.439187 5108 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" containerName="docker-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.443508 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.445362 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.446983 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.447005 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.466759 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475125 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475173 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475254 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475274 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvnn\" (UniqueName: \"kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475324 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475341 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475382 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: 
\"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475423 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475448 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475475 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475555 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets\") pod \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\" (UID: \"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9\") " Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475547 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475767 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.475802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.476318 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.477470 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.477496 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.478017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.478384 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.478426 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.483465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn" (OuterVolumeSpecName: "kube-api-access-zkvnn") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "kube-api-access-zkvnn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.487102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.488069 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577495 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577562 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrpp\" (UniqueName: \"kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577610 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577643 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577734 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: 
\"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577842 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577860 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577876 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.577892 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 
00:23:51.577980 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578031 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578060 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578153 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578173 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578184 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578195 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkvnn\" (UniqueName: \"kubernetes.io/projected/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-kube-api-access-zkvnn\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578244 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578263 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578277 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578289 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578301 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.578313 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: 
\"kubernetes.io/secret/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.628281 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" (UID: "60c3cc3c-b2c9-48d8-8e26-25900f93c5b9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680085 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680175 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680217 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680252 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrpp\" (UniqueName: \"kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680298 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680362 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680652 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.680831 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.681633 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.681748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.681846 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.681891 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.681995 5108 
reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.682552 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.683335 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.684817 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.685987 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.690211 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.705544 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrpp\" (UniqueName: \"kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp\") pod \"smart-gateway-operator-2-build\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.763821 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.977640 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_60c3cc3c-b2c9-48d8-8e26-25900f93c5b9/docker-build/0.log" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.978600 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"60c3cc3c-b2c9-48d8-8e26-25900f93c5b9","Type":"ContainerDied","Data":"edfb4e47befaac17069b9c2e9cf6174566c83d92f6d65de684e1f14176a27870"} Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.978662 5108 scope.go:117] "RemoveContainer" containerID="6e9bc81ef822be5d4c37d83d10efc82c5f8abc8ee8fa983d6683050cc92af07d" Feb 19 00:23:51 crc kubenswrapper[5108]: I0219 00:23:51.978880 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.009143 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.013290 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.027443 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.068196 5108 scope.go:117] "RemoveContainer" containerID="3cba5779772f7bd7e5586afad7edbf096a7359f395dde8d1a9452b74d5a84fc4" Feb 19 00:23:52 crc kubenswrapper[5108]: W0219 00:23:52.075393 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1690be_61bb_4599_8c43_bc42c460fae6.slice/crio-fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882 WatchSource:0}: Error finding container fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882: Status 404 returned error can't find the container with id fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882 Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.987994 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerStarted","Data":"f57f0bc39a289201263fccef7735026a6c2a6334010cd9398999d70723a70107"} Feb 19 00:23:52 crc kubenswrapper[5108]: I0219 00:23:52.988337 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerStarted","Data":"fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882"} Feb 19 00:23:53 crc 
kubenswrapper[5108]: I0219 00:23:53.862760 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c3cc3c-b2c9-48d8-8e26-25900f93c5b9" path="/var/lib/kubelet/pods/60c3cc3c-b2c9-48d8-8e26-25900f93c5b9/volumes" Feb 19 00:23:54 crc kubenswrapper[5108]: I0219 00:23:54.027097 5108 generic.go:358] "Generic (PLEG): container finished" podID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerID="f57f0bc39a289201263fccef7735026a6c2a6334010cd9398999d70723a70107" exitCode=0 Feb 19 00:23:54 crc kubenswrapper[5108]: I0219 00:23:54.027199 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerDied","Data":"f57f0bc39a289201263fccef7735026a6c2a6334010cd9398999d70723a70107"} Feb 19 00:23:55 crc kubenswrapper[5108]: I0219 00:23:55.039666 5108 generic.go:358] "Generic (PLEG): container finished" podID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerID="8cebcd405eba159b4f6b020613f56f86e876c930da8141ca76caf2d1913c55a0" exitCode=0 Feb 19 00:23:55 crc kubenswrapper[5108]: I0219 00:23:55.039748 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerDied","Data":"8cebcd405eba159b4f6b020613f56f86e876c930da8141ca76caf2d1913c55a0"} Feb 19 00:23:55 crc kubenswrapper[5108]: I0219 00:23:55.092926 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_8b1690be-61bb-4599-8c43-bc42c460fae6/manage-dockerfile/0.log" Feb 19 00:23:56 crc kubenswrapper[5108]: I0219 00:23:56.053795 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerStarted","Data":"40d4923d84c7aa0ae240327bcd9dc14512192d1a97e62d00cd0b26019826cfef"} Feb 19 00:23:56 crc kubenswrapper[5108]: I0219 
00:23:56.076105 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.076088531 podStartE2EDuration="5.076088531s" podCreationTimestamp="2026-02-19 00:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:23:56.074646613 +0000 UTC m=+895.041292931" watchObservedRunningTime="2026-02-19 00:23:56.076088531 +0000 UTC m=+895.042734829" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.141124 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524344-t5jjq"] Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.152510 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-t5jjq"] Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.152636 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.194274 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.194480 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.194633 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktxc\" (UniqueName: \"kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc\") pod \"auto-csr-approver-29524344-t5jjq\" (UID: \"2e29f3d5-601c-46a8-b7a7-8732fb1137f6\") " pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.195104 5108 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.295990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zktxc\" (UniqueName: \"kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc\") pod \"auto-csr-approver-29524344-t5jjq\" (UID: \"2e29f3d5-601c-46a8-b7a7-8732fb1137f6\") " pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.315761 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktxc\" (UniqueName: \"kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc\") pod \"auto-csr-approver-29524344-t5jjq\" (UID: \"2e29f3d5-601c-46a8-b7a7-8732fb1137f6\") " pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.521993 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:00 crc kubenswrapper[5108]: I0219 00:24:00.953346 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-t5jjq"] Feb 19 00:24:01 crc kubenswrapper[5108]: I0219 00:24:01.091243 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" event={"ID":"2e29f3d5-601c-46a8-b7a7-8732fb1137f6","Type":"ContainerStarted","Data":"5d938516b5a2e7c4b0abe09331cd080905192e3e6b2f0a52b745d00f4ed31ec5"} Feb 19 00:24:02 crc kubenswrapper[5108]: I0219 00:24:02.253105 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:24:02 crc kubenswrapper[5108]: I0219 00:24:02.254196 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:24:02 crc kubenswrapper[5108]: I0219 00:24:02.263913 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:24:02 crc kubenswrapper[5108]: I0219 00:24:02.264363 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.009979 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.039260 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.039452 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.104633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" event={"ID":"2e29f3d5-601c-46a8-b7a7-8732fb1137f6","Type":"ContainerStarted","Data":"9514bb9f2625d646cf002ffa3858130d17baba4898167e8e98be95535d7e38cb"} Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.134584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.135035 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rqk\" (UniqueName: \"kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.135143 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.134583 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" podStartSLOduration=1.93143516 podStartE2EDuration="3.134561211s" podCreationTimestamp="2026-02-19 00:24:00 +0000 UTC" firstStartedPulling="2026-02-19 00:24:00.968187483 +0000 
UTC m=+899.934833791" lastFinishedPulling="2026-02-19 00:24:02.171313514 +0000 UTC m=+901.137959842" observedRunningTime="2026-02-19 00:24:03.128917272 +0000 UTC m=+902.095563580" watchObservedRunningTime="2026-02-19 00:24:03.134561211 +0000 UTC m=+902.101207519" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.236318 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.236427 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.236492 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78rqk\" (UniqueName: \"kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.236878 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.236893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.256723 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78rqk\" (UniqueName: \"kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk\") pod \"redhat-operators-v6d5c\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.361135 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:03 crc kubenswrapper[5108]: I0219 00:24:03.548997 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:03 crc kubenswrapper[5108]: W0219 00:24:03.567536 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4bef4f9_656a_4ea0_bef6_c244a2bf382b.slice/crio-5cfed82662998f33c5b844de0d673f671969132c586f5f5d86458108a1ce494e WatchSource:0}: Error finding container 5cfed82662998f33c5b844de0d673f671969132c586f5f5d86458108a1ce494e: Status 404 returned error can't find the container with id 5cfed82662998f33c5b844de0d673f671969132c586f5f5d86458108a1ce494e Feb 19 00:24:04 crc kubenswrapper[5108]: I0219 00:24:04.126177 5108 generic.go:358] "Generic (PLEG): container finished" podID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerID="50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4" exitCode=0 Feb 19 00:24:04 crc kubenswrapper[5108]: I0219 00:24:04.126284 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" 
event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerDied","Data":"50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4"} Feb 19 00:24:04 crc kubenswrapper[5108]: I0219 00:24:04.126542 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerStarted","Data":"5cfed82662998f33c5b844de0d673f671969132c586f5f5d86458108a1ce494e"} Feb 19 00:24:05 crc kubenswrapper[5108]: I0219 00:24:05.141331 5108 generic.go:358] "Generic (PLEG): container finished" podID="2e29f3d5-601c-46a8-b7a7-8732fb1137f6" containerID="9514bb9f2625d646cf002ffa3858130d17baba4898167e8e98be95535d7e38cb" exitCode=0 Feb 19 00:24:05 crc kubenswrapper[5108]: I0219 00:24:05.141628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" event={"ID":"2e29f3d5-601c-46a8-b7a7-8732fb1137f6","Type":"ContainerDied","Data":"9514bb9f2625d646cf002ffa3858130d17baba4898167e8e98be95535d7e38cb"} Feb 19 00:24:06 crc kubenswrapper[5108]: I0219 00:24:06.149786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerStarted","Data":"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8"} Feb 19 00:24:06 crc kubenswrapper[5108]: I0219 00:24:06.354230 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:06 crc kubenswrapper[5108]: I0219 00:24:06.385037 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktxc\" (UniqueName: \"kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc\") pod \"2e29f3d5-601c-46a8-b7a7-8732fb1137f6\" (UID: \"2e29f3d5-601c-46a8-b7a7-8732fb1137f6\") " Feb 19 00:24:06 crc kubenswrapper[5108]: I0219 00:24:06.397255 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc" (OuterVolumeSpecName: "kube-api-access-zktxc") pod "2e29f3d5-601c-46a8-b7a7-8732fb1137f6" (UID: "2e29f3d5-601c-46a8-b7a7-8732fb1137f6"). InnerVolumeSpecName "kube-api-access-zktxc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:24:06 crc kubenswrapper[5108]: I0219 00:24:06.486585 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zktxc\" (UniqueName: \"kubernetes.io/projected/2e29f3d5-601c-46a8-b7a7-8732fb1137f6-kube-api-access-zktxc\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.154659 5108 generic.go:358] "Generic (PLEG): container finished" podID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerID="3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8" exitCode=0 Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.155668 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerDied","Data":"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8"} Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.160729 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" 
event={"ID":"2e29f3d5-601c-46a8-b7a7-8732fb1137f6","Type":"ContainerDied","Data":"5d938516b5a2e7c4b0abe09331cd080905192e3e6b2f0a52b745d00f4ed31ec5"} Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.160762 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d938516b5a2e7c4b0abe09331cd080905192e3e6b2f0a52b745d00f4ed31ec5" Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.160820 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-t5jjq" Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.209068 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-xmd4w"] Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.217610 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-xmd4w"] Feb 19 00:24:07 crc kubenswrapper[5108]: I0219 00:24:07.855704 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b07faf-6463-47aa-9306-e36be1281fc5" path="/var/lib/kubelet/pods/11b07faf-6463-47aa-9306-e36be1281fc5/volumes" Feb 19 00:24:08 crc kubenswrapper[5108]: I0219 00:24:08.168229 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerStarted","Data":"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff"} Feb 19 00:24:09 crc kubenswrapper[5108]: I0219 00:24:09.534310 5108 scope.go:117] "RemoveContainer" containerID="74e1f2b2f42d99655fc66868599149aa436a7aa2f3974ea189c471fbdbce79d7" Feb 19 00:24:13 crc kubenswrapper[5108]: I0219 00:24:13.362160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:13 crc kubenswrapper[5108]: I0219 00:24:13.362564 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:13 crc kubenswrapper[5108]: I0219 00:24:13.406161 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:13 crc kubenswrapper[5108]: I0219 00:24:13.433568 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6d5c" podStartSLOduration=9.701784244 podStartE2EDuration="11.433540952s" podCreationTimestamp="2026-02-19 00:24:02 +0000 UTC" firstStartedPulling="2026-02-19 00:24:04.127244393 +0000 UTC m=+903.093890711" lastFinishedPulling="2026-02-19 00:24:05.859001111 +0000 UTC m=+904.825647419" observedRunningTime="2026-02-19 00:24:08.18742765 +0000 UTC m=+907.154073968" watchObservedRunningTime="2026-02-19 00:24:13.433540952 +0000 UTC m=+912.400187300" Feb 19 00:24:14 crc kubenswrapper[5108]: I0219 00:24:14.267444 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:14 crc kubenswrapper[5108]: I0219 00:24:14.314306 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.049055 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.050112 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e29f3d5-601c-46a8-b7a7-8732fb1137f6" containerName="oc" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.050130 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e29f3d5-601c-46a8-b7a7-8732fb1137f6" containerName="oc" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.050294 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e29f3d5-601c-46a8-b7a7-8732fb1137f6" containerName="oc" Feb 19 00:24:16 crc 
kubenswrapper[5108]: I0219 00:24:16.130520 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.130640 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.219156 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.219207 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.219347 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdk8\" (UniqueName: \"kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.223434 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6d5c" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="registry-server" containerID="cri-o://014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff" gracePeriod=2 Feb 19 00:24:16 crc 
kubenswrapper[5108]: I0219 00:24:16.320415 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.320482 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.320526 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdk8\" (UniqueName: \"kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.321357 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.321749 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.342768 
5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdk8\" (UniqueName: \"kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8\") pod \"community-operators-mpbdt\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.450260 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:16 crc kubenswrapper[5108]: I0219 00:24:16.962447 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.127189 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.232945 5108 generic.go:358] "Generic (PLEG): container finished" podID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerID="014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff" exitCode=0 Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.232996 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerDied","Data":"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff"} Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.233380 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6d5c" event={"ID":"c4bef4f9-656a-4ea0-bef6-c244a2bf382b","Type":"ContainerDied","Data":"5cfed82662998f33c5b844de0d673f671969132c586f5f5d86458108a1ce494e"} Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.233083 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6d5c" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.233414 5108 scope.go:117] "RemoveContainer" containerID="014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.235055 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerStarted","Data":"975a0f1e70710d829e0104e4adbac2174eb4341e4af1fdb60ac183e94b9455f6"} Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.240025 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78rqk\" (UniqueName: \"kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk\") pod \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.240291 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") pod \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.240388 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities\") pod \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.242671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities" (OuterVolumeSpecName: "utilities") pod "c4bef4f9-656a-4ea0-bef6-c244a2bf382b" (UID: "c4bef4f9-656a-4ea0-bef6-c244a2bf382b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.247367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk" (OuterVolumeSpecName: "kube-api-access-78rqk") pod "c4bef4f9-656a-4ea0-bef6-c244a2bf382b" (UID: "c4bef4f9-656a-4ea0-bef6-c244a2bf382b"). InnerVolumeSpecName "kube-api-access-78rqk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.250587 5108 scope.go:117] "RemoveContainer" containerID="3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.271308 5108 scope.go:117] "RemoveContainer" containerID="50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.290970 5108 scope.go:117] "RemoveContainer" containerID="014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff" Feb 19 00:24:17 crc kubenswrapper[5108]: E0219 00:24:17.291577 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff\": container with ID starting with 014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff not found: ID does not exist" containerID="014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.291644 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff"} err="failed to get container status \"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff\": rpc error: code = NotFound desc = could not find container 
\"014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff\": container with ID starting with 014f6a38e6a8d596b56abc9d0635f9c066d02e0f385e95aa06c11e1ad2ebf1ff not found: ID does not exist" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.291676 5108 scope.go:117] "RemoveContainer" containerID="3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8" Feb 19 00:24:17 crc kubenswrapper[5108]: E0219 00:24:17.292147 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8\": container with ID starting with 3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8 not found: ID does not exist" containerID="3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.292188 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8"} err="failed to get container status \"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8\": rpc error: code = NotFound desc = could not find container \"3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8\": container with ID starting with 3bda6e365b8ce56f0b208a157b66dbdac830fdf8b76b72614c76fc2c92f59fe8 not found: ID does not exist" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.292215 5108 scope.go:117] "RemoveContainer" containerID="50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4" Feb 19 00:24:17 crc kubenswrapper[5108]: E0219 00:24:17.292532 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4\": container with ID starting with 50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4 not found: ID does not exist" 
containerID="50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.292561 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4"} err="failed to get container status \"50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4\": rpc error: code = NotFound desc = could not find container \"50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4\": container with ID starting with 50de8db16719cf7525c363b4daf50050bf789a07b48221c16669e223f5b7fcb4 not found: ID does not exist" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.340887 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4bef4f9-656a-4ea0-bef6-c244a2bf382b" (UID: "c4bef4f9-656a-4ea0-bef6-c244a2bf382b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.341346 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") pod \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\" (UID: \"c4bef4f9-656a-4ea0-bef6-c244a2bf382b\") " Feb 19 00:24:17 crc kubenswrapper[5108]: W0219 00:24:17.341511 5108 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c4bef4f9-656a-4ea0-bef6-c244a2bf382b/volumes/kubernetes.io~empty-dir/catalog-content Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.341538 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4bef4f9-656a-4ea0-bef6-c244a2bf382b" (UID: "c4bef4f9-656a-4ea0-bef6-c244a2bf382b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.341761 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78rqk\" (UniqueName: \"kubernetes.io/projected/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-kube-api-access-78rqk\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.341795 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.341813 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bef4f9-656a-4ea0-bef6-c244a2bf382b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.568238 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.574368 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6d5c"] Feb 19 00:24:17 crc kubenswrapper[5108]: I0219 00:24:17.868059 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" path="/var/lib/kubelet/pods/c4bef4f9-656a-4ea0-bef6-c244a2bf382b/volumes" Feb 19 00:24:18 crc kubenswrapper[5108]: I0219 00:24:18.248331 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa42622c-959b-45af-9871-70b3922add2d" containerID="361aa8dd7425ed918caa63d0e9f5ef34554a90a1be36b0fcd4288b9d3e28ab06" exitCode=0 Feb 19 00:24:18 crc kubenswrapper[5108]: I0219 00:24:18.248447 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" 
event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerDied","Data":"361aa8dd7425ed918caa63d0e9f5ef34554a90a1be36b0fcd4288b9d3e28ab06"} Feb 19 00:24:20 crc kubenswrapper[5108]: I0219 00:24:20.264618 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa42622c-959b-45af-9871-70b3922add2d" containerID="1d393294bc0e4482750bdfe1b8835b1822488dbe1b4c0376a649fd99a30d856c" exitCode=0 Feb 19 00:24:20 crc kubenswrapper[5108]: I0219 00:24:20.264694 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerDied","Data":"1d393294bc0e4482750bdfe1b8835b1822488dbe1b4c0376a649fd99a30d856c"} Feb 19 00:24:21 crc kubenswrapper[5108]: I0219 00:24:21.273807 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerStarted","Data":"059af87511fdc92a224d9cd94d0a083b2f65d7497c1e1e887f7053d539402998"} Feb 19 00:24:21 crc kubenswrapper[5108]: I0219 00:24:21.295214 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mpbdt" podStartSLOduration=4.235884593 podStartE2EDuration="5.295199703s" podCreationTimestamp="2026-02-19 00:24:16 +0000 UTC" firstStartedPulling="2026-02-19 00:24:18.24967217 +0000 UTC m=+917.216318518" lastFinishedPulling="2026-02-19 00:24:19.30898732 +0000 UTC m=+918.275633628" observedRunningTime="2026-02-19 00:24:21.294597847 +0000 UTC m=+920.261244165" watchObservedRunningTime="2026-02-19 00:24:21.295199703 +0000 UTC m=+920.261846011" Feb 19 00:24:26 crc kubenswrapper[5108]: I0219 00:24:26.450644 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:26 crc kubenswrapper[5108]: I0219 00:24:26.451349 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:26 crc kubenswrapper[5108]: I0219 00:24:26.495716 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:27 crc kubenswrapper[5108]: I0219 00:24:27.376222 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:27 crc kubenswrapper[5108]: I0219 00:24:27.413510 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:29 crc kubenswrapper[5108]: I0219 00:24:29.334772 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mpbdt" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="registry-server" containerID="cri-o://059af87511fdc92a224d9cd94d0a083b2f65d7497c1e1e887f7053d539402998" gracePeriod=2 Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.343113 5108 generic.go:358] "Generic (PLEG): container finished" podID="aa42622c-959b-45af-9871-70b3922add2d" containerID="059af87511fdc92a224d9cd94d0a083b2f65d7497c1e1e887f7053d539402998" exitCode=0 Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.343565 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerDied","Data":"059af87511fdc92a224d9cd94d0a083b2f65d7497c1e1e887f7053d539402998"} Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.394099 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.428344 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities\") pod \"aa42622c-959b-45af-9871-70b3922add2d\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.428593 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fdk8\" (UniqueName: \"kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8\") pod \"aa42622c-959b-45af-9871-70b3922add2d\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.428755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content\") pod \"aa42622c-959b-45af-9871-70b3922add2d\" (UID: \"aa42622c-959b-45af-9871-70b3922add2d\") " Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.429784 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities" (OuterVolumeSpecName: "utilities") pod "aa42622c-959b-45af-9871-70b3922add2d" (UID: "aa42622c-959b-45af-9871-70b3922add2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.455177 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8" (OuterVolumeSpecName: "kube-api-access-5fdk8") pod "aa42622c-959b-45af-9871-70b3922add2d" (UID: "aa42622c-959b-45af-9871-70b3922add2d"). InnerVolumeSpecName "kube-api-access-5fdk8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.478710 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa42622c-959b-45af-9871-70b3922add2d" (UID: "aa42622c-959b-45af-9871-70b3922add2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.530755 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5fdk8\" (UniqueName: \"kubernetes.io/projected/aa42622c-959b-45af-9871-70b3922add2d-kube-api-access-5fdk8\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.530786 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:30 crc kubenswrapper[5108]: I0219 00:24:30.530795 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa42622c-959b-45af-9871-70b3922add2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.351514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpbdt" event={"ID":"aa42622c-959b-45af-9871-70b3922add2d","Type":"ContainerDied","Data":"975a0f1e70710d829e0104e4adbac2174eb4341e4af1fdb60ac183e94b9455f6"} Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.351614 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mpbdt" Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.352446 5108 scope.go:117] "RemoveContainer" containerID="059af87511fdc92a224d9cd94d0a083b2f65d7497c1e1e887f7053d539402998" Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.388274 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.399898 5108 scope.go:117] "RemoveContainer" containerID="1d393294bc0e4482750bdfe1b8835b1822488dbe1b4c0376a649fd99a30d856c" Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.401988 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mpbdt"] Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.414795 5108 scope.go:117] "RemoveContainer" containerID="361aa8dd7425ed918caa63d0e9f5ef34554a90a1be36b0fcd4288b9d3e28ab06" Feb 19 00:24:31 crc kubenswrapper[5108]: I0219 00:24:31.858188 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa42622c-959b-45af-9871-70b3922add2d" path="/var/lib/kubelet/pods/aa42622c-959b-45af-9871-70b3922add2d/volumes" Feb 19 00:25:05 crc kubenswrapper[5108]: I0219 00:25:05.579556 5108 generic.go:358] "Generic (PLEG): container finished" podID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerID="40d4923d84c7aa0ae240327bcd9dc14512192d1a97e62d00cd0b26019826cfef" exitCode=0 Feb 19 00:25:05 crc kubenswrapper[5108]: I0219 00:25:05.579632 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerDied","Data":"40d4923d84c7aa0ae240327bcd9dc14512192d1a97e62d00cd0b26019826cfef"} Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.144648 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.144739 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.939452 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971175 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971255 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971337 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jfrpp\" (UniqueName: \"kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971415 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971466 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971509 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971545 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971581 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971679 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.971729 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles\") pod \"8b1690be-61bb-4599-8c43-bc42c460fae6\" (UID: \"8b1690be-61bb-4599-8c43-bc42c460fae6\") " Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.972142 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.973008 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.973093 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.973465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.973958 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.974140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.977673 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.981168 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.981213 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:25:06 crc kubenswrapper[5108]: I0219 00:25:06.981269 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp" (OuterVolumeSpecName: "kube-api-access-jfrpp") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "kube-api-access-jfrpp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073583 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073630 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073643 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/8b1690be-61bb-4599-8c43-bc42c460fae6-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073655 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073666 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfrpp\" (UniqueName: \"kubernetes.io/projected/8b1690be-61bb-4599-8c43-bc42c460fae6-kube-api-access-jfrpp\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073678 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073690 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073701 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b1690be-61bb-4599-8c43-bc42c460fae6-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.073711 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b1690be-61bb-4599-8c43-bc42c460fae6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.158216 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.175267 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.596341 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.596338 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"8b1690be-61bb-4599-8c43-bc42c460fae6","Type":"ContainerDied","Data":"fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882"} Feb 19 00:25:07 crc kubenswrapper[5108]: I0219 00:25:07.596487 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1c117e57dd619fde9d2d82d3dc63fbf8762f1639fe5585f82e5510779a1882" Feb 19 00:25:09 crc kubenswrapper[5108]: I0219 00:25:09.230739 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8b1690be-61bb-4599-8c43-bc42c460fae6" (UID: "8b1690be-61bb-4599-8c43-bc42c460fae6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:09 crc kubenswrapper[5108]: I0219 00:25:09.310300 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b1690be-61bb-4599-8c43-bc42c460fae6-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.695659 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697044 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="git-clone" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697071 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="git-clone" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697094 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="extract-utilities" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697105 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="extract-utilities" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697119 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="docker-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697131 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="docker-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697148 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="extract-content" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697158 
5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="extract-content" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697195 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="extract-utilities" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697205 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="extract-utilities" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697219 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697229 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697259 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697270 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697287 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="extract-content" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697298 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="extract-content" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697389 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="manage-dockerfile" 
Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697421 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="manage-dockerfile" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697657 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa42622c-959b-45af-9871-70b3922add2d" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697675 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c4bef4f9-656a-4ea0-bef6-c244a2bf382b" containerName="registry-server" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.697692 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b1690be-61bb-4599-8c43-bc42c460fae6" containerName="docker-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.703786 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.707217 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.707239 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.707892 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.708276 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.713593 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.750704 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.750756 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.750870 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.750997 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751034 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751076 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4ht\" (UniqueName: \"kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751221 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751290 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 
00:25:11.751315 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.751346 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.852897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.852960 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.852993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853171 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853199 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853304 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853346 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853376 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853453 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4ht\" (UniqueName: \"kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853516 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853552 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir\") pod \"sg-core-1-build\" 
(UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853698 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853891 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853952 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.853993 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.854335 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 
00:25:11.854648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.855354 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.862073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.862280 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:11 crc kubenswrapper[5108]: I0219 00:25:11.871277 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4ht\" (UniqueName: \"kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht\") pod \"sg-core-1-build\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " pod="service-telemetry/sg-core-1-build" Feb 19 00:25:12 crc kubenswrapper[5108]: I0219 00:25:12.028410 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:25:12 crc kubenswrapper[5108]: I0219 00:25:12.314365 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:12 crc kubenswrapper[5108]: I0219 00:25:12.324470 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:25:12 crc kubenswrapper[5108]: I0219 00:25:12.639295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4","Type":"ContainerStarted","Data":"670448bd0260e2b7c43dfb814c3f1de7d5f681df2a182936dcc735afda90bac8"} Feb 19 00:25:13 crc kubenswrapper[5108]: I0219 00:25:13.650456 5108 generic.go:358] "Generic (PLEG): container finished" podID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerID="c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4" exitCode=0 Feb 19 00:25:13 crc kubenswrapper[5108]: I0219 00:25:13.650524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4","Type":"ContainerDied","Data":"c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4"} Feb 19 00:25:14 crc kubenswrapper[5108]: I0219 00:25:14.661594 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4","Type":"ContainerStarted","Data":"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04"} Feb 19 00:25:14 crc kubenswrapper[5108]: I0219 00:25:14.694080 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.694057193 podStartE2EDuration="3.694057193s" podCreationTimestamp="2026-02-19 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 00:25:14.68768442 +0000 UTC m=+973.654330728" watchObservedRunningTime="2026-02-19 00:25:14.694057193 +0000 UTC m=+973.660703511" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.061195 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.062128 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="docker-build" containerID="cri-o://38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04" gracePeriod=30 Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.502612 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_34027b1d-cb24-4177-8f88-e3b5d5ac5bd4/docker-build/0.log" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.503528 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.535909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536026 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536107 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536143 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536194 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536212 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536255 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h4ht\" (UniqueName: \"kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536336 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536371 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536743 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.536775 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.537588 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.537690 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.537712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.537824 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.542857 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht" (OuterVolumeSpecName: "kube-api-access-5h4ht") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "kube-api-access-5h4ht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.546087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.637862 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.637961 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638037 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache\") pod \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\" (UID: \"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4\") " Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638693 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638726 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638742 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5h4ht\" (UniqueName: \"kubernetes.io/projected/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-kube-api-access-5h4ht\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638758 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638774 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.638791 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.639209 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.644535 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.704599 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.737981 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" (UID: "34027b1d-cb24-4177-8f88-e3b5d5ac5bd4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738064 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_34027b1d-cb24-4177-8f88-e3b5d5ac5bd4/docker-build/0.log" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738377 5108 generic.go:358] "Generic (PLEG): container finished" podID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerID="38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04" exitCode=1 Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738413 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4","Type":"ContainerDied","Data":"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04"} Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738436 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"34027b1d-cb24-4177-8f88-e3b5d5ac5bd4","Type":"ContainerDied","Data":"670448bd0260e2b7c43dfb814c3f1de7d5f681df2a182936dcc735afda90bac8"} Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738451 5108 scope.go:117] "RemoveContainer" containerID="38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.738489 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.740217 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.740236 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.740245 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.740254 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.771969 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.777921 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.793024 5108 scope.go:117] "RemoveContainer" containerID="c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.874152 5108 scope.go:117] "RemoveContainer" containerID="38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04" Feb 19 00:25:22 crc kubenswrapper[5108]: E0219 00:25:22.875626 5108 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04\": container with ID starting with 38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04 not found: ID does not exist" containerID="38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.875661 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04"} err="failed to get container status \"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04\": rpc error: code = NotFound desc = could not find container \"38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04\": container with ID starting with 38bef8f20a872d3b12e14c4a4a4f28679b095c510629ec9450e8d528ab2afa04 not found: ID does not exist" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.875682 5108 scope.go:117] "RemoveContainer" containerID="c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4" Feb 19 00:25:22 crc kubenswrapper[5108]: E0219 00:25:22.876322 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4\": container with ID starting with c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4 not found: ID does not exist" containerID="c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4" Feb 19 00:25:22 crc kubenswrapper[5108]: I0219 00:25:22.876394 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4"} err="failed to get container status \"c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4\": rpc error: code = NotFound desc = could not find container 
\"c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4\": container with ID starting with c99c33bf1515f130008d8d8659eebce4113470c9ee49603f3c9051802435adc4 not found: ID does not exist" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.761287 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.762175 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="manage-dockerfile" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.762194 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="manage-dockerfile" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.762231 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="docker-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.762238 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="docker-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.762388 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" containerName="docker-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.770491 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.772909 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.775876 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.777721 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.780091 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.791989 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854289 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854553 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854660 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854699 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854723 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854747 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" 
(UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854772 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854795 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbp4\" (UniqueName: \"kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.854966 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.855128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.859603 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="34027b1d-cb24-4177-8f88-e3b5d5ac5bd4" path="/var/lib/kubelet/pods/34027b1d-cb24-4177-8f88-e3b5d5ac5bd4/volumes" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957254 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957333 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957394 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957739 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 
00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957788 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957823 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957877 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957888 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958070 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958109 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbp4\" (UniqueName: \"kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958257 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958315 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.958562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.957949 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.959554 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.960131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.961137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.966162 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.966436 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:23 crc kubenswrapper[5108]: I0219 00:25:23.979633 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzbp4\" (UniqueName: \"kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4\") pod \"sg-core-2-build\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:25:24 crc kubenswrapper[5108]: I0219 00:25:24.101030 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:25:24 crc kubenswrapper[5108]: I0219 00:25:24.344820 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:25:24 crc kubenswrapper[5108]: I0219 00:25:24.757584 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerStarted","Data":"91849f2bc4911a9cdac279a5d847a92284fede2ba4c50756c260b3190c369520"} Feb 19 00:25:24 crc kubenswrapper[5108]: I0219 00:25:24.757623 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerStarted","Data":"505bf2c251b128be0b9e3daba776e2baed8eef7284a04bdabff17cbafca83b5b"} Feb 19 00:25:25 crc kubenswrapper[5108]: I0219 00:25:25.766785 5108 generic.go:358] "Generic (PLEG): container finished" podID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerID="91849f2bc4911a9cdac279a5d847a92284fede2ba4c50756c260b3190c369520" exitCode=0 Feb 19 00:25:25 crc kubenswrapper[5108]: I0219 00:25:25.766876 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerDied","Data":"91849f2bc4911a9cdac279a5d847a92284fede2ba4c50756c260b3190c369520"} Feb 19 00:25:26 crc kubenswrapper[5108]: I0219 00:25:26.776498 5108 generic.go:358] "Generic (PLEG): container finished" podID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerID="d2d19a584de485067e4211d9bbf8eaf3f97a3004f9dc7fa6923d02a8a994b3b9" exitCode=0 Feb 19 00:25:26 crc kubenswrapper[5108]: I0219 00:25:26.776559 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerDied","Data":"d2d19a584de485067e4211d9bbf8eaf3f97a3004f9dc7fa6923d02a8a994b3b9"} Feb 19 00:25:26 crc 
kubenswrapper[5108]: I0219 00:25:26.805738 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_6dd6bbe6-5ab1-4d00-8677-a8fee159ee07/manage-dockerfile/0.log" Feb 19 00:25:27 crc kubenswrapper[5108]: I0219 00:25:27.789711 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerStarted","Data":"05c169bec9247df2d7f615216fdff03d126fc1e7b0747c627e81d9f6cc3aaacb"} Feb 19 00:25:27 crc kubenswrapper[5108]: I0219 00:25:27.834406 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.83438806 podStartE2EDuration="4.83438806s" podCreationTimestamp="2026-02-19 00:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:25:27.827092402 +0000 UTC m=+986.793738760" watchObservedRunningTime="2026-02-19 00:25:27.83438806 +0000 UTC m=+986.801034378" Feb 19 00:25:36 crc kubenswrapper[5108]: I0219 00:25:36.144515 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:25:36 crc kubenswrapper[5108]: I0219 00:25:36.145090 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.160023 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524346-ffh9q"] Feb 19 
00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.682364 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-ffh9q"] Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.682637 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.686258 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.686996 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.687436 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.711582 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68f9c\" (UniqueName: \"kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c\") pod \"auto-csr-approver-29524346-ffh9q\" (UID: \"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85\") " pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.834451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68f9c\" (UniqueName: \"kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c\") pod \"auto-csr-approver-29524346-ffh9q\" (UID: \"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85\") " pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:00 crc kubenswrapper[5108]: I0219 00:26:00.854050 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68f9c\" (UniqueName: 
\"kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c\") pod \"auto-csr-approver-29524346-ffh9q\" (UID: \"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85\") " pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:01 crc kubenswrapper[5108]: I0219 00:26:01.007369 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:01 crc kubenswrapper[5108]: I0219 00:26:01.301254 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-ffh9q"] Feb 19 00:26:02 crc kubenswrapper[5108]: I0219 00:26:02.062325 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" event={"ID":"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85","Type":"ContainerStarted","Data":"d6a90b404ae9930f548708ef880b6a40be241bc452147869fa95d49c8266ade9"} Feb 19 00:26:03 crc kubenswrapper[5108]: I0219 00:26:03.069527 5108 generic.go:358] "Generic (PLEG): container finished" podID="1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" containerID="735be55f6729731671c9ee4037391307f935b575efd0b13cf606df14a0c6ca78" exitCode=0 Feb 19 00:26:03 crc kubenswrapper[5108]: I0219 00:26:03.069636 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" event={"ID":"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85","Type":"ContainerDied","Data":"735be55f6729731671c9ee4037391307f935b575efd0b13cf606df14a0c6ca78"} Feb 19 00:26:04 crc kubenswrapper[5108]: I0219 00:26:04.412902 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:04 crc kubenswrapper[5108]: I0219 00:26:04.483109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68f9c\" (UniqueName: \"kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c\") pod \"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85\" (UID: \"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85\") " Feb 19 00:26:04 crc kubenswrapper[5108]: I0219 00:26:04.490693 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c" (OuterVolumeSpecName: "kube-api-access-68f9c") pod "1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" (UID: "1b6c67fa-13e6-4a1c-b520-6dbc388c1d85"). InnerVolumeSpecName "kube-api-access-68f9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:26:04 crc kubenswrapper[5108]: I0219 00:26:04.585683 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68f9c\" (UniqueName: \"kubernetes.io/projected/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85-kube-api-access-68f9c\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.084857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" event={"ID":"1b6c67fa-13e6-4a1c-b520-6dbc388c1d85","Type":"ContainerDied","Data":"d6a90b404ae9930f548708ef880b6a40be241bc452147869fa95d49c8266ade9"} Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.084902 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-ffh9q" Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.084911 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a90b404ae9930f548708ef880b6a40be241bc452147869fa95d49c8266ade9" Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.478649 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-sbl4s"] Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.485229 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-sbl4s"] Feb 19 00:26:05 crc kubenswrapper[5108]: I0219 00:26:05.857902 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb" path="/var/lib/kubelet/pods/ed4dd8c7-6f9e-4b16-94b9-9a4e78cb6edb/volumes" Feb 19 00:26:06 crc kubenswrapper[5108]: I0219 00:26:06.145535 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:26:06 crc kubenswrapper[5108]: I0219 00:26:06.145630 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:06 crc kubenswrapper[5108]: I0219 00:26:06.145694 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:26:06 crc kubenswrapper[5108]: I0219 00:26:06.146374 5108 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:26:06 crc kubenswrapper[5108]: I0219 00:26:06.146452 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02" gracePeriod=600 Feb 19 00:26:07 crc kubenswrapper[5108]: I0219 00:26:07.103541 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02" exitCode=0 Feb 19 00:26:07 crc kubenswrapper[5108]: I0219 00:26:07.104339 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02"} Feb 19 00:26:07 crc kubenswrapper[5108]: I0219 00:26:07.104379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50"} Feb 19 00:26:07 crc kubenswrapper[5108]: I0219 00:26:07.104407 5108 scope.go:117] "RemoveContainer" containerID="093eaa062e1910cafbd3717e66d83cae43e8cdac075555e5e894e1a4f83c28e4" Feb 19 00:26:09 crc kubenswrapper[5108]: I0219 00:26:09.705446 5108 scope.go:117] "RemoveContainer" containerID="0cf214eca2fd73c9a451b5a11faec6b3b3666a41216232d0484449395b5fafa4" Feb 19 
00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.038755 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.039864 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" containerName="oc" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.039884 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" containerName="oc" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.040023 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" containerName="oc" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.058644 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.058770 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.183726 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.184068 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbq6\" (UniqueName: \"kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.184104 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.285294 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbq6\" (UniqueName: \"kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.285358 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities\") pod 
\"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.285425 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.286003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.286225 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.320972 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbq6\" (UniqueName: \"kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6\") pod \"certified-operators-hbwlq\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.379754 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:11 crc kubenswrapper[5108]: I0219 00:26:11.622409 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:12 crc kubenswrapper[5108]: I0219 00:26:12.142900 5108 generic.go:358] "Generic (PLEG): container finished" podID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerID="863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56" exitCode=0 Feb 19 00:26:12 crc kubenswrapper[5108]: I0219 00:26:12.142975 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerDied","Data":"863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56"} Feb 19 00:26:12 crc kubenswrapper[5108]: I0219 00:26:12.143032 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerStarted","Data":"b040d0f1b9b9dcd94c5fea550cb67488b9fd4ddbb17d53bdc34a10fc3be12976"} Feb 19 00:26:13 crc kubenswrapper[5108]: I0219 00:26:13.151081 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerStarted","Data":"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf"} Feb 19 00:26:14 crc kubenswrapper[5108]: I0219 00:26:14.159556 5108 generic.go:358] "Generic (PLEG): container finished" podID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerID="4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf" exitCode=0 Feb 19 00:26:14 crc kubenswrapper[5108]: I0219 00:26:14.159628 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" 
event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerDied","Data":"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf"} Feb 19 00:26:15 crc kubenswrapper[5108]: I0219 00:26:15.171085 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerStarted","Data":"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe"} Feb 19 00:26:15 crc kubenswrapper[5108]: I0219 00:26:15.191170 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbwlq" podStartSLOduration=3.43886837 podStartE2EDuration="4.19115184s" podCreationTimestamp="2026-02-19 00:26:11 +0000 UTC" firstStartedPulling="2026-02-19 00:26:12.14385311 +0000 UTC m=+1031.110499428" lastFinishedPulling="2026-02-19 00:26:12.89613659 +0000 UTC m=+1031.862782898" observedRunningTime="2026-02-19 00:26:15.188141419 +0000 UTC m=+1034.154787737" watchObservedRunningTime="2026-02-19 00:26:15.19115184 +0000 UTC m=+1034.157798148" Feb 19 00:26:21 crc kubenswrapper[5108]: I0219 00:26:21.381137 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:21 crc kubenswrapper[5108]: I0219 00:26:21.381815 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:21 crc kubenswrapper[5108]: I0219 00:26:21.428357 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:22 crc kubenswrapper[5108]: I0219 00:26:22.263922 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:24 crc kubenswrapper[5108]: I0219 00:26:24.832829 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:24 crc kubenswrapper[5108]: I0219 00:26:24.833759 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbwlq" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="registry-server" containerID="cri-o://795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe" gracePeriod=2 Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.181019 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.188459 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities\") pod \"42f9a9a4-b387-4505-94fe-d27d4067c527\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.188597 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content\") pod \"42f9a9a4-b387-4505-94fe-d27d4067c527\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.188709 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pbq6\" (UniqueName: \"kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6\") pod \"42f9a9a4-b387-4505-94fe-d27d4067c527\" (UID: \"42f9a9a4-b387-4505-94fe-d27d4067c527\") " Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.189877 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities" (OuterVolumeSpecName: "utilities") pod "42f9a9a4-b387-4505-94fe-d27d4067c527" (UID: 
"42f9a9a4-b387-4505-94fe-d27d4067c527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.196272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6" (OuterVolumeSpecName: "kube-api-access-4pbq6") pod "42f9a9a4-b387-4505-94fe-d27d4067c527" (UID: "42f9a9a4-b387-4505-94fe-d27d4067c527"). InnerVolumeSpecName "kube-api-access-4pbq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.219226 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42f9a9a4-b387-4505-94fe-d27d4067c527" (UID: "42f9a9a4-b387-4505-94fe-d27d4067c527"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.239269 5108 generic.go:358] "Generic (PLEG): container finished" podID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerID="795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe" exitCode=0 Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.239327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerDied","Data":"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe"} Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.239367 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbwlq" event={"ID":"42f9a9a4-b387-4505-94fe-d27d4067c527","Type":"ContainerDied","Data":"b040d0f1b9b9dcd94c5fea550cb67488b9fd4ddbb17d53bdc34a10fc3be12976"} Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.239387 
5108 scope.go:117] "RemoveContainer" containerID="795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.239425 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbwlq" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.275891 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.278041 5108 scope.go:117] "RemoveContainer" containerID="4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.281821 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbwlq"] Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.290120 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.290158 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4pbq6\" (UniqueName: \"kubernetes.io/projected/42f9a9a4-b387-4505-94fe-d27d4067c527-kube-api-access-4pbq6\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.290169 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f9a9a4-b387-4505-94fe-d27d4067c527-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.299070 5108 scope.go:117] "RemoveContainer" containerID="863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.314531 5108 scope.go:117] "RemoveContainer" 
containerID="795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe" Feb 19 00:26:25 crc kubenswrapper[5108]: E0219 00:26:25.315027 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe\": container with ID starting with 795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe not found: ID does not exist" containerID="795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.315079 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe"} err="failed to get container status \"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe\": rpc error: code = NotFound desc = could not find container \"795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe\": container with ID starting with 795499d0de25b290f39c98b6006253762319f4554f0cef8fc431ff4e0da939fe not found: ID does not exist" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.315101 5108 scope.go:117] "RemoveContainer" containerID="4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf" Feb 19 00:26:25 crc kubenswrapper[5108]: E0219 00:26:25.315367 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf\": container with ID starting with 4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf not found: ID does not exist" containerID="4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.315422 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf"} err="failed to get container status \"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf\": rpc error: code = NotFound desc = could not find container \"4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf\": container with ID starting with 4014b8fe68e83ae543694cb0c06fb4421e237a0831be9b93acec9f5e368be7cf not found: ID does not exist" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.315455 5108 scope.go:117] "RemoveContainer" containerID="863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56" Feb 19 00:26:25 crc kubenswrapper[5108]: E0219 00:26:25.315710 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56\": container with ID starting with 863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56 not found: ID does not exist" containerID="863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.315735 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56"} err="failed to get container status \"863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56\": rpc error: code = NotFound desc = could not find container \"863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56\": container with ID starting with 863e0acf8b1801d33eda7d095561793e2121246d18419a5943453d0177aa7b56 not found: ID does not exist" Feb 19 00:26:25 crc kubenswrapper[5108]: I0219 00:26:25.856780 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" path="/var/lib/kubelet/pods/42f9a9a4-b387-4505-94fe-d27d4067c527/volumes" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 
00:28:00.137459 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524348-4tnnb"] Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139833 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="registry-server" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139871 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="registry-server" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139888 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="extract-utilities" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139895 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="extract-utilities" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139921 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="extract-content" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.139929 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="extract-content" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.140063 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="42f9a9a4-b387-4505-94fe-d27d4067c527" containerName="registry-server" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.220022 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-4tnnb"] Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.220142 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.222394 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.224496 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.226096 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.399407 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twkcr\" (UniqueName: \"kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr\") pod \"auto-csr-approver-29524348-4tnnb\" (UID: \"3dc4ba1e-79ea-4b20-af59-e6772d445069\") " pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.501031 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-twkcr\" (UniqueName: \"kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr\") pod \"auto-csr-approver-29524348-4tnnb\" (UID: \"3dc4ba1e-79ea-4b20-af59-e6772d445069\") " pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.529423 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-twkcr\" (UniqueName: \"kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr\") pod \"auto-csr-approver-29524348-4tnnb\" (UID: \"3dc4ba1e-79ea-4b20-af59-e6772d445069\") " pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.538758 5108 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.823998 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-4tnnb"] Feb 19 00:28:00 crc kubenswrapper[5108]: I0219 00:28:00.915353 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" event={"ID":"3dc4ba1e-79ea-4b20-af59-e6772d445069","Type":"ContainerStarted","Data":"463404f277d5e7aed08a36a58b04cc8910eac6fba2b1d1a7826f5df197a27ea1"} Feb 19 00:28:02 crc kubenswrapper[5108]: I0219 00:28:02.942279 5108 generic.go:358] "Generic (PLEG): container finished" podID="3dc4ba1e-79ea-4b20-af59-e6772d445069" containerID="bc850cec2aa38b823413308977cca0d31a88d037ac4227bd23982a3cc26fd299" exitCode=0 Feb 19 00:28:02 crc kubenswrapper[5108]: I0219 00:28:02.942360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" event={"ID":"3dc4ba1e-79ea-4b20-af59-e6772d445069","Type":"ContainerDied","Data":"bc850cec2aa38b823413308977cca0d31a88d037ac4227bd23982a3cc26fd299"} Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.211743 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.256545 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twkcr\" (UniqueName: \"kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr\") pod \"3dc4ba1e-79ea-4b20-af59-e6772d445069\" (UID: \"3dc4ba1e-79ea-4b20-af59-e6772d445069\") " Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.261754 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr" (OuterVolumeSpecName: "kube-api-access-twkcr") pod "3dc4ba1e-79ea-4b20-af59-e6772d445069" (UID: "3dc4ba1e-79ea-4b20-af59-e6772d445069"). InnerVolumeSpecName "kube-api-access-twkcr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.358207 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twkcr\" (UniqueName: \"kubernetes.io/projected/3dc4ba1e-79ea-4b20-af59-e6772d445069-kube-api-access-twkcr\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.957894 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.957909 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-4tnnb" event={"ID":"3dc4ba1e-79ea-4b20-af59-e6772d445069","Type":"ContainerDied","Data":"463404f277d5e7aed08a36a58b04cc8910eac6fba2b1d1a7826f5df197a27ea1"} Feb 19 00:28:04 crc kubenswrapper[5108]: I0219 00:28:04.957961 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463404f277d5e7aed08a36a58b04cc8910eac6fba2b1d1a7826f5df197a27ea1" Feb 19 00:28:05 crc kubenswrapper[5108]: I0219 00:28:05.274781 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-6qjdp"] Feb 19 00:28:05 crc kubenswrapper[5108]: I0219 00:28:05.281274 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-6qjdp"] Feb 19 00:28:05 crc kubenswrapper[5108]: I0219 00:28:05.856100 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f240172-d316-44ed-abb7-0ecc623b7967" path="/var/lib/kubelet/pods/1f240172-d316-44ed-abb7-0ecc623b7967/volumes" Feb 19 00:28:06 crc kubenswrapper[5108]: I0219 00:28:06.145717 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:28:06 crc kubenswrapper[5108]: I0219 00:28:06.146291 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:28:09 crc 
kubenswrapper[5108]: I0219 00:28:09.885548 5108 scope.go:117] "RemoveContainer" containerID="56988224e9acaf6bbd3324c3ccc10d5ccbeb291f60c19546eea6abfeb1995016" Feb 19 00:28:36 crc kubenswrapper[5108]: I0219 00:28:36.145315 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:28:36 crc kubenswrapper[5108]: I0219 00:28:36.146189 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:28:41 crc kubenswrapper[5108]: I0219 00:28:41.248081 5108 generic.go:358] "Generic (PLEG): container finished" podID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerID="05c169bec9247df2d7f615216fdff03d126fc1e7b0747c627e81d9f6cc3aaacb" exitCode=0 Feb 19 00:28:41 crc kubenswrapper[5108]: I0219 00:28:41.248151 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerDied","Data":"05c169bec9247df2d7f615216fdff03d126fc1e7b0747c627e81d9f6cc3aaacb"} Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.537881 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608305 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzbp4\" (UniqueName: \"kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608369 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608464 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608497 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608515 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608558 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.608735 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609060 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609191 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609244 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609314 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609334 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609419 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609481 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir\") pod \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\" (UID: \"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07\") " Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609807 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.609833 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 
00:28:42.610305 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.610396 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.612039 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.614026 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.614616 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4" (OuterVolumeSpecName: "kube-api-access-qzbp4") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "kube-api-access-qzbp4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.614649 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.616250 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.626711 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711178 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711220 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711232 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711244 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711255 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzbp4\" (UniqueName: \"kubernetes.io/projected/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-kube-api-access-qzbp4\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711267 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711279 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.711292 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:42 crc kubenswrapper[5108]: I0219 00:28:42.976847 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:43 crc kubenswrapper[5108]: I0219 00:28:43.014834 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:43 crc kubenswrapper[5108]: I0219 00:28:43.270495 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:28:43 crc kubenswrapper[5108]: I0219 00:28:43.270640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6dd6bbe6-5ab1-4d00-8677-a8fee159ee07","Type":"ContainerDied","Data":"505bf2c251b128be0b9e3daba776e2baed8eef7284a04bdabff17cbafca83b5b"} Feb 19 00:28:43 crc kubenswrapper[5108]: I0219 00:28:43.270691 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505bf2c251b128be0b9e3daba776e2baed8eef7284a04bdabff17cbafca83b5b" Feb 19 00:28:45 crc kubenswrapper[5108]: I0219 00:28:45.952411 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" (UID: "6dd6bbe6-5ab1-4d00-8677-a8fee159ee07"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:45 crc kubenswrapper[5108]: I0219 00:28:45.971819 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6dd6bbe6-5ab1-4d00-8677-a8fee159ee07-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.883595 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884696 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="git-clone" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884730 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="git-clone" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884765 5108 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="docker-build" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884778 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="docker-build" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884812 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="manage-dockerfile" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884825 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="manage-dockerfile" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884861 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3dc4ba1e-79ea-4b20-af59-e6772d445069" containerName="oc" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.884875 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc4ba1e-79ea-4b20-af59-e6772d445069" containerName="oc" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.885096 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3dc4ba1e-79ea-4b20-af59-e6772d445069" containerName="oc" Feb 19 00:28:46 crc kubenswrapper[5108]: I0219 00:28:46.885126 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="6dd6bbe6-5ab1-4d00-8677-a8fee159ee07" containerName="docker-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.017853 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.017981 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.020042 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.021483 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.021530 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.021761 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.189928 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190028 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190079 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir\") pod \"sg-bridge-1-build\" (UID: 
\"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfgl\" (UniqueName: \"kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190123 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190220 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190343 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190397 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190440 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190461 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.190495 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292483 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfgl\" (UniqueName: 
\"kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292552 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292690 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292837 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.292992 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293193 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293254 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: 
\"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293588 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.293716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.294005 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.294331 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: 
\"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.294410 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.294661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.294768 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.295244 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.296049 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 
crc kubenswrapper[5108]: I0219 00:28:47.302475 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.303355 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.322878 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfgl\" (UniqueName: \"kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl\") pod \"sg-bridge-1-build\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.346901 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:47 crc kubenswrapper[5108]: I0219 00:28:47.606187 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:48 crc kubenswrapper[5108]: I0219 00:28:48.322777 5108 generic.go:358] "Generic (PLEG): container finished" podID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerID="a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c" exitCode=0 Feb 19 00:28:48 crc kubenswrapper[5108]: I0219 00:28:48.322858 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b","Type":"ContainerDied","Data":"a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c"} Feb 19 00:28:48 crc kubenswrapper[5108]: I0219 00:28:48.323374 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b","Type":"ContainerStarted","Data":"694cddd6da0e92fdbf5cc93084f58176a690272d3d34b1cd6bd7a95815a4da4e"} Feb 19 00:28:49 crc kubenswrapper[5108]: I0219 00:28:49.334403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b","Type":"ContainerStarted","Data":"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a"} Feb 19 00:28:49 crc kubenswrapper[5108]: I0219 00:28:49.372324 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.372295672 podStartE2EDuration="3.372295672s" podCreationTimestamp="2026-02-19 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:28:49.36848224 +0000 UTC m=+1188.335128568" watchObservedRunningTime="2026-02-19 00:28:49.372295672 +0000 UTC m=+1188.338942010" Feb 19 00:28:57 
crc kubenswrapper[5108]: I0219 00:28:57.321233 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.323008 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="docker-build" containerID="cri-o://5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a" gracePeriod=30 Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.791212 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b/docker-build/0.log" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.792330 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858332 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858370 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: 
\"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858391 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfgl\" (UniqueName: \"kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858455 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858477 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858502 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858550 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858571 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858617 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.858659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir\") pod \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\" (UID: \"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b\") " Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.859292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.859463 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.859460 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.859501 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.859250 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.861708 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.861855 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.865698 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.865762 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.869066 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl" (OuterVolumeSpecName: "kube-api-access-lwfgl") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "kube-api-access-lwfgl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.928371 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960645 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960679 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960689 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960722 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960735 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960744 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960752 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960760 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960768 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960800 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:57 crc kubenswrapper[5108]: I0219 00:28:57.960813 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwfgl\" (UniqueName: 
\"kubernetes.io/projected/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-kube-api-access-lwfgl\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.245075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" (UID: "b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.264020 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.409231 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b/docker-build/0.log" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.410806 5108 generic.go:358] "Generic (PLEG): container finished" podID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerID="5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a" exitCode=1 Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.410986 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.411013 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b","Type":"ContainerDied","Data":"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a"} Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.411094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b","Type":"ContainerDied","Data":"694cddd6da0e92fdbf5cc93084f58176a690272d3d34b1cd6bd7a95815a4da4e"} Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.411125 5108 scope.go:117] "RemoveContainer" containerID="5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.472188 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.475887 5108 scope.go:117] "RemoveContainer" containerID="a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.483155 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.573594 5108 scope.go:117] "RemoveContainer" containerID="5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a" Feb 19 00:28:58 crc kubenswrapper[5108]: E0219 00:28:58.574197 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a\": container with ID starting with 5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a not found: ID does not exist" 
containerID="5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.574237 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a"} err="failed to get container status \"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a\": rpc error: code = NotFound desc = could not find container \"5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a\": container with ID starting with 5ff04d2aa7ed510bc981aefbe9f51ec6a58209e38c3784dfa757f44dd250de9a not found: ID does not exist" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.574260 5108 scope.go:117] "RemoveContainer" containerID="a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c" Feb 19 00:28:58 crc kubenswrapper[5108]: E0219 00:28:58.574575 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c\": container with ID starting with a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c not found: ID does not exist" containerID="a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.574602 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c"} err="failed to get container status \"a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c\": rpc error: code = NotFound desc = could not find container \"a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c\": container with ID starting with a39fccd91823a24a7bfb05bbd09029b7c10f61a2e6c72c75a8123f2c5296471c not found: ID does not exist" Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.980929 5108 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.983865 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="docker-build"
Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.983912 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="docker-build"
Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.983961 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="manage-dockerfile"
Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.983975 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="manage-dockerfile"
Feb 19 00:28:58 crc kubenswrapper[5108]: I0219 00:28:58.984207 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" containerName="docker-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.123856 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.124061 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.127088 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\""
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.127182 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\""
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.128193 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\""
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.128219 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\""
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.291778 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.291872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.291910 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292015 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxszc\" (UniqueName: \"kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292097 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292249 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292401 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292430 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292554 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.292585 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393760 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393820 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393877 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxszc\" (UniqueName: \"kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394059 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.393927 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394283 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394497 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394606 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.394789 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.395292 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.395627 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.396078 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.396104 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.396649 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.396686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.404669 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.408289 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.417305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxszc\" (UniqueName: \"kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc\") pod \"sg-bridge-2-build\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") " pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.440897 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.648984 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Feb 19 00:28:59 crc kubenswrapper[5108]: I0219 00:28:59.856717 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b" path="/var/lib/kubelet/pods/b005c6b3-2eef-46f5-8152-0e1ebf5dcb2b/volumes"
Feb 19 00:29:00 crc kubenswrapper[5108]: I0219 00:29:00.431290 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerStarted","Data":"0b946f2661fd163da5f822bbb10de4526f446b0571e56ffa6d85a97018d2bdb3"}
Feb 19 00:29:00 crc kubenswrapper[5108]: I0219 00:29:00.431710 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerStarted","Data":"8655d18110a5f3e2aad5b0d2c8a9ab8b34f3554dd68ab8622ca30659cbe50e73"}
Feb 19 00:29:01 crc kubenswrapper[5108]: I0219 00:29:01.438503 5108 generic.go:358] "Generic (PLEG): container finished" podID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerID="0b946f2661fd163da5f822bbb10de4526f446b0571e56ffa6d85a97018d2bdb3" exitCode=0
Feb 19 00:29:01 crc kubenswrapper[5108]: I0219 00:29:01.438659 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerDied","Data":"0b946f2661fd163da5f822bbb10de4526f446b0571e56ffa6d85a97018d2bdb3"}
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.384988 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log"
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.385133 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log"
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.394448 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.394470 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.447220 5108 generic.go:358] "Generic (PLEG): container finished" podID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerID="8c23c73081c2694a19e28f8376fc1044a1e7fbdf86d68f07964b15018fcc965d" exitCode=0
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.447389 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerDied","Data":"8c23c73081c2694a19e28f8376fc1044a1e7fbdf86d68f07964b15018fcc965d"}
Feb 19 00:29:02 crc kubenswrapper[5108]: I0219 00:29:02.500226 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_46f058bd-e4be-4633-ad51-c27868fc8eda/manage-dockerfile/0.log"
Feb 19 00:29:03 crc kubenswrapper[5108]: I0219 00:29:03.457802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerStarted","Data":"df744d0ebfe0ffebc0ac5fcd9ef15b75715d47ae860170bdf7b3f0e5f4fc56f5"}
Feb 19 00:29:03 crc kubenswrapper[5108]: I0219 00:29:03.488886 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.488870505 podStartE2EDuration="5.488870505s" podCreationTimestamp="2026-02-19 00:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:29:03.484033736 +0000 UTC m=+1202.450680064" watchObservedRunningTime="2026-02-19 00:29:03.488870505 +0000 UTC m=+1202.455516813"
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.145465 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.145825 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.145878 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6"
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.146604 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.146684 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50" gracePeriod=600
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.481204 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50" exitCode=0
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.481301 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50"}
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.481625 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f"}
Feb 19 00:29:06 crc kubenswrapper[5108]: I0219 00:29:06.481651 5108 scope.go:117] "RemoveContainer" containerID="4cfa792aa453a077c6bdecc7c8970848374d26dd08be250811638f0ac93b7f02"
Feb 19 00:29:49 crc kubenswrapper[5108]: I0219 00:29:49.813805 5108 generic.go:358] "Generic (PLEG): container finished" podID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerID="df744d0ebfe0ffebc0ac5fcd9ef15b75715d47ae860170bdf7b3f0e5f4fc56f5" exitCode=0
Feb 19 00:29:49 crc kubenswrapper[5108]: I0219 00:29:49.813892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerDied","Data":"df744d0ebfe0ffebc0ac5fcd9ef15b75715d47ae860170bdf7b3f0e5f4fc56f5"}
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.113146 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148334 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148425 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148467 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148610 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148751 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148798 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148844 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148891 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxszc\" (UniqueName: \"kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148924 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.148986 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir\") pod \"46f058bd-e4be-4633-ad51-c27868fc8eda\" (UID: \"46f058bd-e4be-4633-ad51-c27868fc8eda\") "
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.153056 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.155673 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.155870 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.156500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.157024 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.158811 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.160063 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.192083 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.192679 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.192891 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc" (OuterVolumeSpecName: "kube-api-access-jxszc") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "kube-api-access-jxszc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252816 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jxszc\" (UniqueName: \"kubernetes.io/projected/46f058bd-e4be-4633-ad51-c27868fc8eda-kube-api-access-jxszc\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252865 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252879 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252890 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252902 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252914 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46f058bd-e4be-4633-ad51-c27868fc8eda-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.252925 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.253139 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46f058bd-e4be-4633-ad51-c27868fc8eda-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.253154 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.253167 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/46f058bd-e4be-4633-ad51-c27868fc8eda-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.284393 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.354577 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.832249 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"46f058bd-e4be-4633-ad51-c27868fc8eda","Type":"ContainerDied","Data":"8655d18110a5f3e2aad5b0d2c8a9ab8b34f3554dd68ab8622ca30659cbe50e73"}
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.832295 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8655d18110a5f3e2aad5b0d2c8a9ab8b34f3554dd68ab8622ca30659cbe50e73"
Feb 19 00:29:51 crc kubenswrapper[5108]: I0219 00:29:51.832401 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Feb 19 00:29:52 crc kubenswrapper[5108]: I0219 00:29:52.006157 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "46f058bd-e4be-4633-ad51-c27868fc8eda" (UID: "46f058bd-e4be-4633-ad51-c27868fc8eda"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:29:52 crc kubenswrapper[5108]: I0219 00:29:52.065463 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46f058bd-e4be-4633-ad51-c27868fc8eda-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.624513 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625444 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="git-clone"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625459 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="git-clone"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625475 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="manage-dockerfile"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625480 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="manage-dockerfile"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625511 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="docker-build"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625517 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="docker-build"
Feb 19 00:29:55 crc kubenswrapper[5108]: I0219 00:29:55.625619 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="46f058bd-e4be-4633-ad51-c27868fc8eda" containerName="docker-build"
Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.012077 5108 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.012251 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.016172 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.016390 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.016427 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.016600 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087160 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087238 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc 
kubenswrapper[5108]: I0219 00:29:56.087268 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087290 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087421 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run\") 
pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087452 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087488 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzzk7\" (UniqueName: \"kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087517 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087537 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.087585 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lzzk7\" (UniqueName: \"kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189371 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189395 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" 
Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189511 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189555 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189584 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.189707 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.190176 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.190667 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.190951 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.191276 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.191618 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.191654 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.192318 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.192483 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.192525 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.196923 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.203366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.206337 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lzzk7\" (UniqueName: \"kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.347313 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.586366 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:29:56 crc kubenswrapper[5108]: I0219 00:29:56.870680 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570","Type":"ContainerStarted","Data":"7b27a726a5697f7787ba296bad0c0700971ef5feaefe7046f33f439ce52bf228"} Feb 19 00:29:57 crc kubenswrapper[5108]: I0219 00:29:57.882108 5108 generic.go:358] "Generic (PLEG): container finished" podID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerID="ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774" exitCode=0 Feb 19 00:29:57 crc kubenswrapper[5108]: I0219 00:29:57.882259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570","Type":"ContainerDied","Data":"ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774"} Feb 19 00:29:58 crc kubenswrapper[5108]: I0219 00:29:58.896221 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570","Type":"ContainerStarted","Data":"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"} Feb 19 00:29:58 crc kubenswrapper[5108]: I0219 00:29:58.926110 5108 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.926075595 podStartE2EDuration="3.926075595s" podCreationTimestamp="2026-02-19 00:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:29:58.925009937 +0000 UTC m=+1257.891656265" watchObservedRunningTime="2026-02-19 00:29:58.926075595 +0000 UTC m=+1257.892721903" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.136453 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"] Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.141364 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.142683 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524350-gs8s6"] Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.143784 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.143865 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.147706 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.147767 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"] Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.149907 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.150307 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.150988 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.153736 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-gs8s6"] Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.250668 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.250807 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcgj\" (UniqueName: \"kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.251126 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtslp\" (UniqueName: \"kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp\") pod \"auto-csr-approver-29524350-gs8s6\" (UID: \"782e3f41-8f60-44c0-80b1-bb38f5fdee23\") " pod="openshift-infra/auto-csr-approver-29524350-gs8s6" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.251181 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.352479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.352529 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8fcgj\" (UniqueName: \"kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.352789 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtslp\" (UniqueName: \"kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp\") pod \"auto-csr-approver-29524350-gs8s6\" (UID: 
\"782e3f41-8f60-44c0-80b1-bb38f5fdee23\") " pod="openshift-infra/auto-csr-approver-29524350-gs8s6" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.352878 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.353914 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.365409 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.370546 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fcgj\" (UniqueName: \"kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj\") pod \"collect-profiles-29524350-mr877\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.376012 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtslp\" (UniqueName: 
\"kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp\") pod \"auto-csr-approver-29524350-gs8s6\" (UID: \"782e3f41-8f60-44c0-80b1-bb38f5fdee23\") " pod="openshift-infra/auto-csr-approver-29524350-gs8s6"
Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.462372 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"
Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.475952 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-gs8s6"
Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.735071 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-gs8s6"]
Feb 19 00:30:00 crc kubenswrapper[5108]: W0219 00:30:00.743504 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod782e3f41_8f60_44c0_80b1_bb38f5fdee23.slice/crio-507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a WatchSource:0}: Error finding container 507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a: Status 404 returned error can't find the container with id 507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a
Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.899129 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"]
Feb 19 00:30:00 crc kubenswrapper[5108]: W0219 00:30:00.916297 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e993e0_0946_4d9a_8129_9bf68727178d.slice/crio-013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1 WatchSource:0}: Error finding container 013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1: Status 404 returned error can't find the container with id 013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1
Feb 19 00:30:00 crc kubenswrapper[5108]: I0219 00:30:00.919331 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" event={"ID":"782e3f41-8f60-44c0-80b1-bb38f5fdee23","Type":"ContainerStarted","Data":"507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a"}
Feb 19 00:30:01 crc kubenswrapper[5108]: I0219 00:30:01.926804 5108 generic.go:358] "Generic (PLEG): container finished" podID="45e993e0-0946-4d9a-8129-9bf68727178d" containerID="511370a638ad76ecd7817962e149f87ff8b4dcdcd22672649e93652967246328" exitCode=0
Feb 19 00:30:01 crc kubenswrapper[5108]: I0219 00:30:01.926858 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" event={"ID":"45e993e0-0946-4d9a-8129-9bf68727178d","Type":"ContainerDied","Data":"511370a638ad76ecd7817962e149f87ff8b4dcdcd22672649e93652967246328"}
Feb 19 00:30:01 crc kubenswrapper[5108]: I0219 00:30:01.927393 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" event={"ID":"45e993e0-0946-4d9a-8129-9bf68727178d","Type":"ContainerStarted","Data":"013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1"}
Feb 19 00:30:02 crc kubenswrapper[5108]: I0219 00:30:02.935731 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" event={"ID":"782e3f41-8f60-44c0-80b1-bb38f5fdee23","Type":"ContainerStarted","Data":"66f642a0f642401e41f33077ff09bd4fde158cf004982d956101dd1026c22d23"}
Feb 19 00:30:02 crc kubenswrapper[5108]: I0219 00:30:02.950749 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" podStartSLOduration=1.35511639 podStartE2EDuration="2.950725586s" podCreationTimestamp="2026-02-19 00:30:00 +0000 UTC" firstStartedPulling="2026-02-19 00:30:00.744914965 +0000 UTC m=+1259.711561273" lastFinishedPulling="2026-02-19 00:30:02.340524151 +0000 UTC m=+1261.307170469" observedRunningTime="2026-02-19 00:30:02.950000367 +0000 UTC m=+1261.916646695" watchObservedRunningTime="2026-02-19 00:30:02.950725586 +0000 UTC m=+1261.917371894"
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.174699 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.293799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fcgj\" (UniqueName: \"kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj\") pod \"45e993e0-0946-4d9a-8129-9bf68727178d\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") "
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.293886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume\") pod \"45e993e0-0946-4d9a-8129-9bf68727178d\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") "
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.294070 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume\") pod \"45e993e0-0946-4d9a-8129-9bf68727178d\" (UID: \"45e993e0-0946-4d9a-8129-9bf68727178d\") "
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.294864 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume" (OuterVolumeSpecName: "config-volume") pod "45e993e0-0946-4d9a-8129-9bf68727178d" (UID: "45e993e0-0946-4d9a-8129-9bf68727178d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.299752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "45e993e0-0946-4d9a-8129-9bf68727178d" (UID: "45e993e0-0946-4d9a-8129-9bf68727178d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.301075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj" (OuterVolumeSpecName: "kube-api-access-8fcgj") pod "45e993e0-0946-4d9a-8129-9bf68727178d" (UID: "45e993e0-0946-4d9a-8129-9bf68727178d"). InnerVolumeSpecName "kube-api-access-8fcgj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.395583 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45e993e0-0946-4d9a-8129-9bf68727178d-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.395619 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fcgj\" (UniqueName: \"kubernetes.io/projected/45e993e0-0946-4d9a-8129-9bf68727178d-kube-api-access-8fcgj\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.395628 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e993e0-0946-4d9a-8129-9bf68727178d-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.945957 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877"
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.947214 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-mr877" event={"ID":"45e993e0-0946-4d9a-8129-9bf68727178d","Type":"ContainerDied","Data":"013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1"}
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.947245 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013f5db87fc058ae46bda67194a9c0a15edc86bfb4a574347673b451f8fb1ce1"
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.949193 5108 generic.go:358] "Generic (PLEG): container finished" podID="782e3f41-8f60-44c0-80b1-bb38f5fdee23" containerID="66f642a0f642401e41f33077ff09bd4fde158cf004982d956101dd1026c22d23" exitCode=0
Feb 19 00:30:03 crc kubenswrapper[5108]: I0219 00:30:03.949259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" event={"ID":"782e3f41-8f60-44c0-80b1-bb38f5fdee23","Type":"ContainerDied","Data":"66f642a0f642401e41f33077ff09bd4fde158cf004982d956101dd1026c22d23"}
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.169430 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-gs8s6"
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.319004 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtslp\" (UniqueName: \"kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp\") pod \"782e3f41-8f60-44c0-80b1-bb38f5fdee23\" (UID: \"782e3f41-8f60-44c0-80b1-bb38f5fdee23\") "
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.329598 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp" (OuterVolumeSpecName: "kube-api-access-wtslp") pod "782e3f41-8f60-44c0-80b1-bb38f5fdee23" (UID: "782e3f41-8f60-44c0-80b1-bb38f5fdee23"). InnerVolumeSpecName "kube-api-access-wtslp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.420398 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wtslp\" (UniqueName: \"kubernetes.io/projected/782e3f41-8f60-44c0-80b1-bb38f5fdee23-kube-api-access-wtslp\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.969494 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-gs8s6"
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.970359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-gs8s6" event={"ID":"782e3f41-8f60-44c0-80b1-bb38f5fdee23","Type":"ContainerDied","Data":"507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a"}
Feb 19 00:30:05 crc kubenswrapper[5108]: I0219 00:30:05.970439 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507bee4e1b73c5cdf35b47f5c7be6f0fad037db96d346b18015b39d2e64f157a"
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.019488 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-t5jjq"]
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.026149 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-t5jjq"]
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.250765 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.251473 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="docker-build" containerID="cri-o://2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690" gracePeriod=30
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.689336 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570/docker-build/0.log"
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.690128 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840347 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840559 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840566 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840717 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840800 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzzk7\" (UniqueName: \"kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.840856 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841010 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841051 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841120 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841241 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run\") pod \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\" (UID: \"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570\") "
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841674 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841721 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841776 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.841803 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.842353 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.842401 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.842469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.842712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.846671 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.847250 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.849516 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7" (OuterVolumeSpecName: "kube-api-access-lzzk7") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "kube-api-access-lzzk7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.907071 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943469 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943510 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943520 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943527 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzzk7\" (UniqueName: \"kubernetes.io/projected/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-kube-api-access-lzzk7\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943536 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943544 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943552 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943560 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.943568 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.982894 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570/docker-build/0.log"
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.983711 5108 generic.go:358] "Generic (PLEG): container finished" podID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerID="2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690" exitCode=1
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.983885 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570","Type":"ContainerDied","Data":"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"}
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.983899 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.983964 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570","Type":"ContainerDied","Data":"7b27a726a5697f7787ba296bad0c0700971ef5feaefe7046f33f439ce52bf228"}
Feb 19 00:30:06 crc kubenswrapper[5108]: I0219 00:30:06.983997 5108 scope.go:117] "RemoveContainer" containerID="2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.013006 5108 scope.go:117] "RemoveContainer" containerID="ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.108029 5108 scope.go:117] "RemoveContainer" containerID="2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"
Feb 19 00:30:07 crc kubenswrapper[5108]: E0219 00:30:07.108592 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690\": container with ID starting with 2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690 not found: ID does not exist" containerID="2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.108657 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690"} err="failed to get container status \"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690\": rpc error: code = NotFound desc = could not find container \"2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690\": container with ID starting with 2e75e3e78e782a53819161718da1801f3b7a90c0aa7783030ca4176f022b3690 not found: ID does not exist"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.108707 5108 scope.go:117] "RemoveContainer" containerID="ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774"
Feb 19 00:30:07 crc kubenswrapper[5108]: E0219 00:30:07.109191 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774\": container with ID starting with ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774 not found: ID does not exist" containerID="ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.109264 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774"} err="failed to get container status \"ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774\": rpc error: code = NotFound desc = could not find container \"ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774\": container with ID starting with ae09ea8199fcaaa92b25ef31c85295cedf86d248f346e15c995a91e435a9f774 not found: ID does not exist"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.274695 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" (UID: "a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.329635 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.338091 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.350309 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.863183 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e29f3d5-601c-46a8-b7a7-8732fb1137f6" path="/var/lib/kubelet/pods/2e29f3d5-601c-46a8-b7a7-8732fb1137f6/volumes"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.864757 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" path="/var/lib/kubelet/pods/a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570/volumes"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.891792 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892564 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="manage-dockerfile"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892580 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="manage-dockerfile"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892597 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="docker-build"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892606 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="docker-build"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892628 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="782e3f41-8f60-44c0-80b1-bb38f5fdee23" containerName="oc"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892636 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="782e3f41-8f60-44c0-80b1-bb38f5fdee23" containerName="oc"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892666 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45e993e0-0946-4d9a-8129-9bf68727178d" containerName="collect-profiles"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892675 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e993e0-0946-4d9a-8129-9bf68727178d" containerName="collect-profiles"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892799 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="782e3f41-8f60-44c0-80b1-bb38f5fdee23" containerName="oc"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892811 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a2b90cd1-7f90-4aaf-abcb-65c1ddc8f570" containerName="docker-build"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.892825 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="45e993e0-0946-4d9a-8129-9bf68727178d" containerName="collect-profiles"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.898558 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.902159 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.902159 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.902463 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.906057 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\""
Feb 19 00:30:07 crc kubenswrapper[5108]: I0219 00:30:07.919208 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.060771 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.060872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.060903 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvbl\" (UniqueName: \"kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.060977 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061073 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061164 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061306 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061403 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061526 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061622 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.061657 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.163721 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.163813 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvbl\" (UniqueName: \"kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.163879 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.163974 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") "
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164040 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164092 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164140 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164193 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164265 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164337 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164369 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164504 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.164916 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.165177 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.165658 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.165826 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.166483 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.166554 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.166499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.166871 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.180526 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.182485 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.194291 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5bvbl\" (UniqueName: \"kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.227339 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:30:08 crc kubenswrapper[5108]: I0219 00:30:08.501244 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 19 00:30:08 crc kubenswrapper[5108]: W0219 00:30:08.514751 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3d192c6_7c88_446b_a978_fad852e1df00.slice/crio-5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655 WatchSource:0}: Error finding container 5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655: Status 404 returned error can't find the container with id 5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655 Feb 19 00:30:09 crc kubenswrapper[5108]: I0219 00:30:09.009002 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerStarted","Data":"45ce09d973d261a52f70fdeb651fa37acfc06c7f51735523e6b005a16a241654"} Feb 19 00:30:09 crc kubenswrapper[5108]: I0219 00:30:09.009089 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerStarted","Data":"5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655"} Feb 19 00:30:09 crc kubenswrapper[5108]: E0219 00:30:09.148875 5108 upgradeaware.go:427] Error proxying data from client to 
backend: readfrom tcp 38.102.83.234:49542->38.102.83.234:33243: write tcp 38.102.83.234:49542->38.102.83.234:33243: write: connection reset by peer Feb 19 00:30:10 crc kubenswrapper[5108]: I0219 00:30:10.016130 5108 scope.go:117] "RemoveContainer" containerID="9514bb9f2625d646cf002ffa3858130d17baba4898167e8e98be95535d7e38cb" Feb 19 00:30:10 crc kubenswrapper[5108]: I0219 00:30:10.018520 5108 generic.go:358] "Generic (PLEG): container finished" podID="b3d192c6-7c88-446b-a978-fad852e1df00" containerID="45ce09d973d261a52f70fdeb651fa37acfc06c7f51735523e6b005a16a241654" exitCode=0 Feb 19 00:30:10 crc kubenswrapper[5108]: I0219 00:30:10.018740 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerDied","Data":"45ce09d973d261a52f70fdeb651fa37acfc06c7f51735523e6b005a16a241654"} Feb 19 00:30:11 crc kubenswrapper[5108]: I0219 00:30:11.029875 5108 generic.go:358] "Generic (PLEG): container finished" podID="b3d192c6-7c88-446b-a978-fad852e1df00" containerID="9dbff25de3218e0625ff38b48fd0301748d2f45fcb1cdff838ac84b8b3373b81" exitCode=0 Feb 19 00:30:11 crc kubenswrapper[5108]: I0219 00:30:11.029929 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerDied","Data":"9dbff25de3218e0625ff38b48fd0301748d2f45fcb1cdff838ac84b8b3373b81"} Feb 19 00:30:11 crc kubenswrapper[5108]: I0219 00:30:11.075175 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_b3d192c6-7c88-446b-a978-fad852e1df00/manage-dockerfile/0.log" Feb 19 00:30:12 crc kubenswrapper[5108]: I0219 00:30:12.046685 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerStarted","Data":"3092c9be6f33f74b8f7c44970b6363810762e7241def93af7909e92699ecdd93"} Feb 19 00:30:12 crc kubenswrapper[5108]: I0219 00:30:12.081126 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.081107855 podStartE2EDuration="5.081107855s" podCreationTimestamp="2026-02-19 00:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:30:12.074646972 +0000 UTC m=+1271.041293380" watchObservedRunningTime="2026-02-19 00:30:12.081107855 +0000 UTC m=+1271.047754153" Feb 19 00:31:03 crc kubenswrapper[5108]: I0219 00:31:03.464170 5108 generic.go:358] "Generic (PLEG): container finished" podID="b3d192c6-7c88-446b-a978-fad852e1df00" containerID="3092c9be6f33f74b8f7c44970b6363810762e7241def93af7909e92699ecdd93" exitCode=0 Feb 19 00:31:03 crc kubenswrapper[5108]: I0219 00:31:03.464695 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerDied","Data":"3092c9be6f33f74b8f7c44970b6363810762e7241def93af7909e92699ecdd93"} Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.719110 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.874861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bvbl\" (UniqueName: \"kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.874951 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875092 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875145 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875196 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875261 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875354 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875407 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run\") pod \"b3d192c6-7c88-446b-a978-fad852e1df00\" (UID: \"b3d192c6-7c88-446b-a978-fad852e1df00\") " Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875663 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.875703 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.877126 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.877627 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.879051 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.879224 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.882152 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.885075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.885570 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl" (OuterVolumeSpecName: "kube-api-access-5bvbl") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "kube-api-access-5bvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.888495 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890441 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890472 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890481 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b3d192c6-7c88-446b-a978-fad852e1df00-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890488 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890496 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890505 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bvbl\" (UniqueName: \"kubernetes.io/projected/b3d192c6-7c88-446b-a978-fad852e1df00-kube-api-access-5bvbl\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890514 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-system-configs\") on node \"crc\" DevicePath \"\"" 
Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890523 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/b3d192c6-7c88-446b-a978-fad852e1df00-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890533 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.890541 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3d192c6-7c88-446b-a978-fad852e1df00-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.979220 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:04 crc kubenswrapper[5108]: I0219 00:31:04.992049 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:05 crc kubenswrapper[5108]: I0219 00:31:05.481875 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b3d192c6-7c88-446b-a978-fad852e1df00","Type":"ContainerDied","Data":"5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655"} Feb 19 00:31:05 crc kubenswrapper[5108]: I0219 00:31:05.481923 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a5b68e091919e2884cff5d5f3b72990c9342c9cf5df75effc9af261c53d9655" Feb 19 00:31:05 crc kubenswrapper[5108]: I0219 00:31:05.482050 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:31:05 crc kubenswrapper[5108]: I0219 00:31:05.764550 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b3d192c6-7c88-446b-a978-fad852e1df00" (UID: "b3d192c6-7c88-446b-a978-fad852e1df00"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:05 crc kubenswrapper[5108]: I0219 00:31:05.802991 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b3d192c6-7c88-446b-a978-fad852e1df00-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:06 crc kubenswrapper[5108]: I0219 00:31:06.145172 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:31:06 crc kubenswrapper[5108]: I0219 00:31:06.145265 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.132327 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136392 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="manage-dockerfile" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136529 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="manage-dockerfile" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136643 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="docker-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136732 5108 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="docker-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136827 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="git-clone" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.136925 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="git-clone" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.137198 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3d192c6-7c88-446b-a978-fad852e1df00" containerName="docker-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.143629 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.148664 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-sys-config\"" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.149045 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-ca\"" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.149231 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-global-ca\"" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.149337 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.151565 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222779 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222915 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222952 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.222986 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223029 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223044 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7r2\" (UniqueName: \"kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2\") pod \"service-telemetry-operator-bundle-1-build\" (UID: 
\"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223064 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.223105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324228 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324281 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324329 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324356 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324408 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324431 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: 
\"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324492 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324567 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324573 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324598 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324649 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc7r2\" (UniqueName: \"kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324678 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324758 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 
00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.324964 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.325362 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.325628 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.325835 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.325854 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" 
(UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.326197 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.333016 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.333026 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.345447 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc7r2\" (UniqueName: \"kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.481904 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.752187 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 19 00:31:14 crc kubenswrapper[5108]: I0219 00:31:14.787593 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:31:15 crc kubenswrapper[5108]: I0219 00:31:15.556400 5108 generic.go:358] "Generic (PLEG): container finished" podID="0743f890-d6a1-4905-871b-d7bb01df4041" containerID="5320edacf2395e374027032f6804668ed1ed9ac5cfd6c437a2e21d95b8711c9d" exitCode=0 Feb 19 00:31:15 crc kubenswrapper[5108]: I0219 00:31:15.556494 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"0743f890-d6a1-4905-871b-d7bb01df4041","Type":"ContainerDied","Data":"5320edacf2395e374027032f6804668ed1ed9ac5cfd6c437a2e21d95b8711c9d"} Feb 19 00:31:15 crc kubenswrapper[5108]: I0219 00:31:15.556518 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"0743f890-d6a1-4905-871b-d7bb01df4041","Type":"ContainerStarted","Data":"24abc967843f90fa9ae670ff5055eada47b402f76cac93dfbe3c373618ded6bf"} Feb 19 00:31:16 crc kubenswrapper[5108]: I0219 00:31:16.568136 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_0743f890-d6a1-4905-871b-d7bb01df4041/docker-build/0.log" Feb 19 00:31:16 crc kubenswrapper[5108]: I0219 00:31:16.569087 5108 generic.go:358] "Generic (PLEG): container finished" podID="0743f890-d6a1-4905-871b-d7bb01df4041" containerID="97efbc73aeada7b14dad657560925b87f536ae72f0d7f237a14a61d4d08b85be" exitCode=1 Feb 19 00:31:16 crc kubenswrapper[5108]: I0219 00:31:16.569253 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"0743f890-d6a1-4905-871b-d7bb01df4041","Type":"ContainerDied","Data":"97efbc73aeada7b14dad657560925b87f536ae72f0d7f237a14a61d4d08b85be"} Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.884027 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_0743f890-d6a1-4905-871b-d7bb01df4041/docker-build/0.log" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.884982 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975113 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975557 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975620 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975659 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: 
\"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975745 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975785 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975803 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 
19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.975980 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.976089 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc7r2\" (UniqueName: \"kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.976127 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache\") pod \"0743f890-d6a1-4905-871b-d7bb01df4041\" (UID: \"0743f890-d6a1-4905-871b-d7bb01df4041\") " Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.976743 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977137 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977202 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977311 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977734 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977796 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977886 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.977979 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.978260 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.986545 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2" (OuterVolumeSpecName: "kube-api-access-lc7r2") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "kube-api-access-lc7r2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.988208 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:17 crc kubenswrapper[5108]: I0219 00:31:17.992101 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "0743f890-d6a1-4905-871b-d7bb01df4041" (UID: "0743f890-d6a1-4905-871b-d7bb01df4041"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078225 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc7r2\" (UniqueName: \"kubernetes.io/projected/0743f890-d6a1-4905-871b-d7bb01df4041-kube-api-access-lc7r2\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078292 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078302 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078313 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078323 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078334 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078345 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/0743f890-d6a1-4905-871b-d7bb01df4041-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078355 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078363 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0743f890-d6a1-4905-871b-d7bb01df4041-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078375 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0743f890-d6a1-4905-871b-d7bb01df4041-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078386 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.078395 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0743f890-d6a1-4905-871b-d7bb01df4041-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.589086 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_0743f890-d6a1-4905-871b-d7bb01df4041/docker-build/0.log" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.589969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"0743f890-d6a1-4905-871b-d7bb01df4041","Type":"ContainerDied","Data":"24abc967843f90fa9ae670ff5055eada47b402f76cac93dfbe3c373618ded6bf"} Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.589989 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 19 00:31:18 crc kubenswrapper[5108]: I0219 00:31:18.590014 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24abc967843f90fa9ae670ff5055eada47b402f76cac93dfbe3c373618ded6bf" Feb 19 00:31:24 crc kubenswrapper[5108]: I0219 00:31:24.692507 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 19 00:31:24 crc kubenswrapper[5108]: I0219 00:31:24.703066 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 19 00:31:25 crc kubenswrapper[5108]: I0219 00:31:25.859324 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" path="/var/lib/kubelet/pods/0743f890-d6a1-4905-871b-d7bb01df4041/volumes" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.273081 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.274030 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" containerName="docker-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.274052 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" containerName="docker-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.274097 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" containerName="manage-dockerfile" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.274109 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" containerName="manage-dockerfile" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.274233 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="0743f890-d6a1-4905-871b-d7bb01df4041" containerName="docker-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.287739 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.294574 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-global-ca\"" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.294625 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-ca\"" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.294709 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.294727 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-sys-config\"" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.296700 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.401714 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.402201 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t2tmd\" (UniqueName: \"kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.402458 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.402625 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.402873 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.402998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.403033 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.403070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.403098 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.403164 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 
00:31:26.403300 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.403396 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505173 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2tmd\" (UniqueName: \"kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505289 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505334 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles\") pod 
\"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505384 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505877 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.505953 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.506022 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.506520 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.506843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.507143 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.507379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.507492 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.507763 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.508754 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.508796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.508858 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.509108 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.509469 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.509884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 
00:31:26.514531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.514721 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.538882 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2tmd\" (UniqueName: \"kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.609488 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:26 crc kubenswrapper[5108]: I0219 00:31:26.805962 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Feb 19 00:31:27 crc kubenswrapper[5108]: I0219 00:31:27.671086 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerStarted","Data":"1e506638766066def23ca35d89f525d868c392542cb3beeb4a6d5a9ad4c71f94"} Feb 19 00:31:27 crc kubenswrapper[5108]: I0219 00:31:27.671158 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerStarted","Data":"c02612dc6ba55ab998869a246336826a3b204b0feffe8d6593b0cffe8f743ebc"} Feb 19 00:31:28 crc kubenswrapper[5108]: I0219 00:31:28.681101 5108 generic.go:358] "Generic (PLEG): container finished" podID="13aa9202-8d00-4d06-ada7-42667d58c754" containerID="1e506638766066def23ca35d89f525d868c392542cb3beeb4a6d5a9ad4c71f94" exitCode=0 Feb 19 00:31:28 crc kubenswrapper[5108]: I0219 00:31:28.681250 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerDied","Data":"1e506638766066def23ca35d89f525d868c392542cb3beeb4a6d5a9ad4c71f94"} Feb 19 00:31:29 crc kubenswrapper[5108]: I0219 00:31:29.697789 5108 generic.go:358] "Generic (PLEG): container finished" podID="13aa9202-8d00-4d06-ada7-42667d58c754" containerID="bb51f54aff2f6812b41282134d0373954a92a80e111e7a44cb6f388004b014a5" exitCode=0 Feb 19 00:31:29 crc kubenswrapper[5108]: I0219 00:31:29.697847 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" 
event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerDied","Data":"bb51f54aff2f6812b41282134d0373954a92a80e111e7a44cb6f388004b014a5"} Feb 19 00:31:29 crc kubenswrapper[5108]: I0219 00:31:29.746325 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_13aa9202-8d00-4d06-ada7-42667d58c754/manage-dockerfile/0.log" Feb 19 00:31:30 crc kubenswrapper[5108]: I0219 00:31:30.710109 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerStarted","Data":"f7a60600986b8a87eefd9dd8752dd22d08bd4eb964f95d8c3e5190f99f9a1e2d"} Feb 19 00:31:33 crc kubenswrapper[5108]: I0219 00:31:33.742586 5108 generic.go:358] "Generic (PLEG): container finished" podID="13aa9202-8d00-4d06-ada7-42667d58c754" containerID="f7a60600986b8a87eefd9dd8752dd22d08bd4eb964f95d8c3e5190f99f9a1e2d" exitCode=0 Feb 19 00:31:33 crc kubenswrapper[5108]: I0219 00:31:33.742682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerDied","Data":"f7a60600986b8a87eefd9dd8752dd22d08bd4eb964f95d8c3e5190f99f9a1e2d"} Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.000130 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134602 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134653 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134677 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134772 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134796 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134831 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134828 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134894 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2tmd\" (UniqueName: \"kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134912 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134966 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.134986 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.135069 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles\") pod \"13aa9202-8d00-4d06-ada7-42667d58c754\" (UID: \"13aa9202-8d00-4d06-ada7-42667d58c754\") " Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.135194 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.135436 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.135476 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/13aa9202-8d00-4d06-ada7-42667d58c754-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.136793 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.137994 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.140793 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.140833 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.141412 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.145414 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.145919 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd" (OuterVolumeSpecName: "kube-api-access-t2tmd") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "kube-api-access-t2tmd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.147831 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.148294 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.154005 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "13aa9202-8d00-4d06-ada7-42667d58c754" (UID: "13aa9202-8d00-4d06-ada7-42667d58c754"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237323 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237386 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237404 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237423 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237442 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2tmd\" (UniqueName: \"kubernetes.io/projected/13aa9202-8d00-4d06-ada7-42667d58c754-kube-api-access-t2tmd\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237459 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237478 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237496 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13aa9202-8d00-4d06-ada7-42667d58c754-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237514 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/13aa9202-8d00-4d06-ada7-42667d58c754-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.237530 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/13aa9202-8d00-4d06-ada7-42667d58c754-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.762578 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.762622 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"13aa9202-8d00-4d06-ada7-42667d58c754","Type":"ContainerDied","Data":"c02612dc6ba55ab998869a246336826a3b204b0feffe8d6593b0cffe8f743ebc"}
Feb 19 00:31:35 crc kubenswrapper[5108]: I0219 00:31:35.763209 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c02612dc6ba55ab998869a246336826a3b204b0feffe8d6593b0cffe8f743ebc"
Feb 19 00:31:36 crc kubenswrapper[5108]: I0219 00:31:36.145079 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:31:36 crc kubenswrapper[5108]: I0219 00:31:36.145348 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.893689 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894750 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="git-clone"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894765 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="git-clone"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894804 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="manage-dockerfile"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894815 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="manage-dockerfile"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894845 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="docker-build"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.894857 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="docker-build"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.895071 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="13aa9202-8d00-4d06-ada7-42667d58c754" containerName="docker-build"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.900137 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.903604 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-sys-config\""
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.904423 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\""
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.905012 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-ca\""
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.905397 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-global-ca\""
Feb 19 00:31:38 crc kubenswrapper[5108]: I0219 00:31:38.916355 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011174 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011253 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011329 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsswn\" (UniqueName: \"kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011389 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011430 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011633 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011754 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011805 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011838 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.011910 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.012067 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.012117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.113701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.113804 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.113853 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114225 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114362 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114443 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114614 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114709 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114747 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jsswn\" (UniqueName: \"kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114881 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.114983 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.115046 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.115075 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.115271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.115365 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.115593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.116159 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.116691 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.116787 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.122874 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.123304 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.146121 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsswn\" (UniqueName: \"kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.222637 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.529290 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Feb 19 00:31:39 crc kubenswrapper[5108]: I0219 00:31:39.796314 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"cbf90f81-e3d5-47f9-b849-bd6dde9523f7","Type":"ContainerStarted","Data":"14af9f918d1614da96f066475883dcadbc7b0f5be39968da15ba917f762cf4de"}
Feb 19 00:31:40 crc kubenswrapper[5108]: I0219 00:31:40.810514 5108 generic.go:358] "Generic (PLEG): container finished" podID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerID="2b2b51a9a04733ead42d886b700eee4dae45306940a7b4f2a58cc76ce529df06" exitCode=0
Feb 19 00:31:40 crc kubenswrapper[5108]: I0219 00:31:40.810657 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"cbf90f81-e3d5-47f9-b849-bd6dde9523f7","Type":"ContainerDied","Data":"2b2b51a9a04733ead42d886b700eee4dae45306940a7b4f2a58cc76ce529df06"}
Feb 19 00:31:41 crc kubenswrapper[5108]: I0219 00:31:41.827449 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_cbf90f81-e3d5-47f9-b849-bd6dde9523f7/docker-build/0.log"
Feb 19 00:31:41 crc kubenswrapper[5108]: I0219 00:31:41.828389 5108 generic.go:358] "Generic (PLEG): container finished" podID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerID="923df79374461ac9706343f2dbc43239e0973ab29a3ca53c650b2466052edfe7" exitCode=1
Feb 19 00:31:41 crc kubenswrapper[5108]: I0219 00:31:41.828492 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"cbf90f81-e3d5-47f9-b849-bd6dde9523f7","Type":"ContainerDied","Data":"923df79374461ac9706343f2dbc43239e0973ab29a3ca53c650b2466052edfe7"}
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.122478 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_cbf90f81-e3d5-47f9-b849-bd6dde9523f7/docker-build/0.log"
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.123449 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201235 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201331 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201409 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsswn\" (UniqueName: \"kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201489 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201604 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201673 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.201776 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202213 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202319 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202388 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202456 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202730 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull\") pod \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\" (UID: \"cbf90f81-e3d5-47f9-b849-bd6dde9523f7\") "
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.202950 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203081 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203127 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203377 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203415 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203434 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203454 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203471 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.203574 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.204057 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.205004 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "container-storage-root".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.205337 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.209614 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.209852 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn" (OuterVolumeSpecName: "kube-api-access-jsswn") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "kube-api-access-jsswn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.210259 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "cbf90f81-e3d5-47f9-b849-bd6dde9523f7" (UID: "cbf90f81-e3d5-47f9-b849-bd6dde9523f7"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304664 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304705 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jsswn\" (UniqueName: \"kubernetes.io/projected/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-kube-api-access-jsswn\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304718 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304730 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304742 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304754 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.304767 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf90f81-e3d5-47f9-b849-bd6dde9523f7-container-storage-run\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.850171 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_cbf90f81-e3d5-47f9-b849-bd6dde9523f7/docker-build/0.log" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.851084 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.862437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"cbf90f81-e3d5-47f9-b849-bd6dde9523f7","Type":"ContainerDied","Data":"14af9f918d1614da96f066475883dcadbc7b0f5be39968da15ba917f762cf4de"} Feb 19 00:31:43 crc kubenswrapper[5108]: I0219 00:31:43.862480 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14af9f918d1614da96f066475883dcadbc7b0f5be39968da15ba917f762cf4de" Feb 19 00:31:49 crc kubenswrapper[5108]: I0219 00:31:49.368359 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 19 00:31:49 crc kubenswrapper[5108]: I0219 00:31:49.379360 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 19 00:31:49 crc kubenswrapper[5108]: I0219 00:31:49.862930 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" path="/var/lib/kubelet/pods/cbf90f81-e3d5-47f9-b849-bd6dde9523f7/volumes" Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.991247 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.994176 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" 
containerName="manage-dockerfile" Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.994494 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerName="manage-dockerfile" Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.994765 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerName="docker-build" Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.995171 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerName="docker-build" Feb 19 00:31:50 crc kubenswrapper[5108]: I0219 00:31:50.995515 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbf90f81-e3d5-47f9-b849-bd6dde9523f7" containerName="docker-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.002122 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.006678 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-ca\"" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.006813 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-sys-config\"" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.006980 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-global-ca\"" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.008724 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.014031 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.129843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.129925 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.129961 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.129994 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130052 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130074 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rn2t\" (UniqueName: \"kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130120 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130373 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130464 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.130735 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.232715 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233157 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.232961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233231 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 
00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233601 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233775 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233828 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.233954 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.234006 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7rn2t\" (UniqueName: \"kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.234085 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.234422 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.234418 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.234646 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.235115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.235352 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.235857 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.235999 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.244653 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.249798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.263505 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rn2t\" (UniqueName: \"kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.333083 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.862562 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Feb 19 00:31:51 crc kubenswrapper[5108]: I0219 00:31:51.917361 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerStarted","Data":"d21ed1817a735b05c28026954850e4b4add090e86f3b95b50e32fd0d11733c4e"} Feb 19 00:31:52 crc kubenswrapper[5108]: I0219 00:31:52.926176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerStarted","Data":"6d9d49c6aa0a3de23be9f5642a1ac74b44876538118d48f104d7892938aa5a5b"} Feb 19 00:31:53 crc kubenswrapper[5108]: I0219 00:31:53.937337 5108 generic.go:358] "Generic (PLEG): container finished" podID="5f035009-e186-46df-a9f4-2115bf667ddd" containerID="6d9d49c6aa0a3de23be9f5642a1ac74b44876538118d48f104d7892938aa5a5b" exitCode=0 Feb 19 00:31:53 crc kubenswrapper[5108]: I0219 00:31:53.937743 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerDied","Data":"6d9d49c6aa0a3de23be9f5642a1ac74b44876538118d48f104d7892938aa5a5b"} Feb 19 00:31:54 crc kubenswrapper[5108]: I0219 00:31:54.946492 5108 generic.go:358] "Generic (PLEG): container finished" podID="5f035009-e186-46df-a9f4-2115bf667ddd" containerID="765df415895cfd4e01752d4572ed6c8df602bf7e43d6e9d12043da4ff0a8046b" exitCode=0 Feb 19 00:31:54 crc kubenswrapper[5108]: I0219 00:31:54.946549 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" 
event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerDied","Data":"765df415895cfd4e01752d4572ed6c8df602bf7e43d6e9d12043da4ff0a8046b"} Feb 19 00:31:54 crc kubenswrapper[5108]: I0219 00:31:54.977380 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_5f035009-e186-46df-a9f4-2115bf667ddd/manage-dockerfile/0.log" Feb 19 00:31:55 crc kubenswrapper[5108]: I0219 00:31:55.956558 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerStarted","Data":"478bd67cbc0b2f9ea353e036a8c7b198b29ebbaa3c3280b36df8e72cc1bc5d03"} Feb 19 00:31:55 crc kubenswrapper[5108]: I0219 00:31:55.985044 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.985023864 podStartE2EDuration="5.985023864s" podCreationTimestamp="2026-02-19 00:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:31:55.976722807 +0000 UTC m=+1374.943369125" watchObservedRunningTime="2026-02-19 00:31:55.985023864 +0000 UTC m=+1374.951670172" Feb 19 00:31:59 crc kubenswrapper[5108]: I0219 00:31:59.997379 5108 generic.go:358] "Generic (PLEG): container finished" podID="5f035009-e186-46df-a9f4-2115bf667ddd" containerID="478bd67cbc0b2f9ea353e036a8c7b198b29ebbaa3c3280b36df8e72cc1bc5d03" exitCode=0 Feb 19 00:31:59 crc kubenswrapper[5108]: I0219 00:31:59.998372 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerDied","Data":"478bd67cbc0b2f9ea353e036a8c7b198b29ebbaa3c3280b36df8e72cc1bc5d03"} Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.156679 5108 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29524352-ghgqx"] Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.169144 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-ghgqx"] Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.169344 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.176310 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.176441 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.176812 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.241209 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hf8\" (UniqueName: \"kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8\") pod \"auto-csr-approver-29524352-ghgqx\" (UID: \"c7cb93e4-af2a-4449-8999-ecf6da709e25\") " pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.342326 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87hf8\" (UniqueName: \"kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8\") pod \"auto-csr-approver-29524352-ghgqx\" (UID: \"c7cb93e4-af2a-4449-8999-ecf6da709e25\") " pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.366420 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-87hf8\" (UniqueName: \"kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8\") pod \"auto-csr-approver-29524352-ghgqx\" (UID: \"c7cb93e4-af2a-4449-8999-ecf6da709e25\") " pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.495842 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:00 crc kubenswrapper[5108]: I0219 00:32:00.742038 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-ghgqx"] Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.006972 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" event={"ID":"c7cb93e4-af2a-4449-8999-ecf6da709e25","Type":"ContainerStarted","Data":"ddc0316b9abade8ffbb5991b52480bb80e593c4a588816303e44f6b6506ebc30"} Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.243041 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.359405 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rn2t\" (UniqueName: \"kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.359572 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.359660 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361304 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361417 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361547 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361628 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361670 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.361817 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.362006 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.362489 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.362770 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.363113 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.363201 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.363432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.363532 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs\") pod \"5f035009-e186-46df-a9f4-2115bf667ddd\" (UID: \"5f035009-e186-46df-a9f4-2115bf667ddd\") " Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364355 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364448 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364522 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364552 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364576 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364604 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364627 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.364654 5108 reconciler_common.go:299] "Volume detached for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.365214 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.368857 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t" (OuterVolumeSpecName: "kube-api-access-7rn2t") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "kube-api-access-7rn2t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.369100 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.370006 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.370870 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "5f035009-e186-46df-a9f4-2115bf667ddd" (UID: "5f035009-e186-46df-a9f4-2115bf667ddd"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.465828 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f035009-e186-46df-a9f4-2115bf667ddd-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.465880 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.465898 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/5f035009-e186-46df-a9f4-2115bf667ddd-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.465914 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f035009-e186-46df-a9f4-2115bf667ddd-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 00:32:01.465955 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f035009-e186-46df-a9f4-2115bf667ddd-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:01 crc kubenswrapper[5108]: I0219 
00:32:01.465968 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7rn2t\" (UniqueName: \"kubernetes.io/projected/5f035009-e186-46df-a9f4-2115bf667ddd-kube-api-access-7rn2t\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:02 crc kubenswrapper[5108]: I0219 00:32:02.021761 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" event={"ID":"c7cb93e4-af2a-4449-8999-ecf6da709e25","Type":"ContainerStarted","Data":"8526ebedc4d904e0f590dc14b68c6af204033a4a895f2a5589d7d438e457f723"} Feb 19 00:32:02 crc kubenswrapper[5108]: I0219 00:32:02.029623 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 19 00:32:02 crc kubenswrapper[5108]: I0219 00:32:02.029641 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"5f035009-e186-46df-a9f4-2115bf667ddd","Type":"ContainerDied","Data":"d21ed1817a735b05c28026954850e4b4add090e86f3b95b50e32fd0d11733c4e"} Feb 19 00:32:02 crc kubenswrapper[5108]: I0219 00:32:02.029679 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21ed1817a735b05c28026954850e4b4add090e86f3b95b50e32fd0d11733c4e" Feb 19 00:32:02 crc kubenswrapper[5108]: I0219 00:32:02.039013 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" podStartSLOduration=1.161939151 podStartE2EDuration="2.038926677s" podCreationTimestamp="2026-02-19 00:32:00 +0000 UTC" firstStartedPulling="2026-02-19 00:32:00.743331832 +0000 UTC m=+1379.709978130" lastFinishedPulling="2026-02-19 00:32:01.620319348 +0000 UTC m=+1380.586965656" observedRunningTime="2026-02-19 00:32:02.03574258 +0000 UTC m=+1381.002388948" watchObservedRunningTime="2026-02-19 00:32:02.038926677 +0000 UTC m=+1381.005573025" Feb 19 00:32:03 crc kubenswrapper[5108]: I0219 
00:32:03.037592 5108 generic.go:358] "Generic (PLEG): container finished" podID="c7cb93e4-af2a-4449-8999-ecf6da709e25" containerID="8526ebedc4d904e0f590dc14b68c6af204033a4a895f2a5589d7d438e457f723" exitCode=0 Feb 19 00:32:03 crc kubenswrapper[5108]: I0219 00:32:03.037749 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" event={"ID":"c7cb93e4-af2a-4449-8999-ecf6da709e25","Type":"ContainerDied","Data":"8526ebedc4d904e0f590dc14b68c6af204033a4a895f2a5589d7d438e457f723"} Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.347442 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.410886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87hf8\" (UniqueName: \"kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8\") pod \"c7cb93e4-af2a-4449-8999-ecf6da709e25\" (UID: \"c7cb93e4-af2a-4449-8999-ecf6da709e25\") " Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.421391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8" (OuterVolumeSpecName: "kube-api-access-87hf8") pod "c7cb93e4-af2a-4449-8999-ecf6da709e25" (UID: "c7cb93e4-af2a-4449-8999-ecf6da709e25"). InnerVolumeSpecName "kube-api-access-87hf8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.512311 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-87hf8\" (UniqueName: \"kubernetes.io/projected/c7cb93e4-af2a-4449-8999-ecf6da709e25-kube-api-access-87hf8\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.930241 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-ffh9q"] Feb 19 00:32:04 crc kubenswrapper[5108]: I0219 00:32:04.938566 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-ffh9q"] Feb 19 00:32:05 crc kubenswrapper[5108]: I0219 00:32:05.067010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" event={"ID":"c7cb93e4-af2a-4449-8999-ecf6da709e25","Type":"ContainerDied","Data":"ddc0316b9abade8ffbb5991b52480bb80e593c4a588816303e44f6b6506ebc30"} Feb 19 00:32:05 crc kubenswrapper[5108]: I0219 00:32:05.067098 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc0316b9abade8ffbb5991b52480bb80e593c4a588816303e44f6b6506ebc30" Feb 19 00:32:05 crc kubenswrapper[5108]: I0219 00:32:05.067034 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-ghgqx" Feb 19 00:32:05 crc kubenswrapper[5108]: I0219 00:32:05.860226 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6c67fa-13e6-4a1c-b520-6dbc388c1d85" path="/var/lib/kubelet/pods/1b6c67fa-13e6-4a1c-b520-6dbc388c1d85/volumes" Feb 19 00:32:06 crc kubenswrapper[5108]: I0219 00:32:06.144928 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:32:06 crc kubenswrapper[5108]: I0219 00:32:06.145364 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:32:06 crc kubenswrapper[5108]: I0219 00:32:06.145504 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:32:06 crc kubenswrapper[5108]: I0219 00:32:06.146324 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:32:06 crc kubenswrapper[5108]: I0219 00:32:06.146515 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" 
containerName="machine-config-daemon" containerID="cri-o://d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f" gracePeriod=600 Feb 19 00:32:07 crc kubenswrapper[5108]: I0219 00:32:07.087703 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f" exitCode=0 Feb 19 00:32:07 crc kubenswrapper[5108]: I0219 00:32:07.087754 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f"} Feb 19 00:32:07 crc kubenswrapper[5108]: I0219 00:32:07.088513 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4"} Feb 19 00:32:07 crc kubenswrapper[5108]: I0219 00:32:07.088550 5108 scope.go:117] "RemoveContainer" containerID="2d81d337fd772fc475aa1e34f1691df7c5878b03eaa535cbcff5e87cd3b6dc50" Feb 19 00:32:10 crc kubenswrapper[5108]: I0219 00:32:10.202486 5108 scope.go:117] "RemoveContainer" containerID="735be55f6729731671c9ee4037391307f935b575efd0b13cf606df14a0c6ca78" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.894781 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896321 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="docker-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896344 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" 
containerName="docker-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896363 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="manage-dockerfile" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896374 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="manage-dockerfile" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896390 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7cb93e4-af2a-4449-8999-ecf6da709e25" containerName="oc" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896397 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cb93e4-af2a-4449-8999-ecf6da709e25" containerName="oc" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896415 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="git-clone" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896424 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="git-clone" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896563 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7cb93e4-af2a-4449-8999-ecf6da709e25" containerName="oc" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.896581 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5f035009-e186-46df-a9f4-2115bf667ddd" containerName="docker-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.906863 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.910213 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.910460 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.910660 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-2hd4q\"" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.910828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.911017 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913495 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913557 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913594 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913684 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913737 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmtxg\" (UniqueName: \"kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913839 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.913964 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.914002 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.914122 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.914157 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.915086 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:17 crc kubenswrapper[5108]: I0219 00:32:17.916733 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.016671 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.016749 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.016831 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.016864 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.016903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017002 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017043 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017078 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017636 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmtxg\" (UniqueName: \"kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017798 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.018115 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.017981 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.018257 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.018345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.018400 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.018715 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.019003 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.019651 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.025073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.026373 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.027519 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.046652 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmtxg\" (UniqueName: \"kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.242689 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:18 crc kubenswrapper[5108]: I0219 00:32:18.735769 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 19 00:32:19 crc kubenswrapper[5108]: I0219 00:32:19.212313 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerStarted","Data":"2e3935edb8b97f343a265b47fc8179c8dc4e65097e6c76e56d06d75d47ae4b16"}
Feb 19 00:32:19 crc kubenswrapper[5108]: I0219 00:32:19.212358 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerStarted","Data":"ba02b836d341ffb77f17d1ae690b6862088df693e6a4ccce588e6a6c90ede5fd"}
Feb 19 00:32:20 crc kubenswrapper[5108]: I0219 00:32:20.225283 5108 generic.go:358] "Generic (PLEG): container finished" podID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerID="2e3935edb8b97f343a265b47fc8179c8dc4e65097e6c76e56d06d75d47ae4b16" exitCode=0
Feb 19 00:32:20 crc kubenswrapper[5108]: I0219 00:32:20.225387 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerDied","Data":"2e3935edb8b97f343a265b47fc8179c8dc4e65097e6c76e56d06d75d47ae4b16"}
Feb 19 00:32:21 crc kubenswrapper[5108]: I0219 00:32:21.238990 5108 generic.go:358] "Generic (PLEG): container finished" podID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerID="cbc29770b11bb395af518ce551b0cea1c67b4446ef8fdd3c4b0cf9546172f4ec" exitCode=0
Feb 19 00:32:21 crc kubenswrapper[5108]: I0219 00:32:21.239057 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerDied","Data":"cbc29770b11bb395af518ce551b0cea1c67b4446ef8fdd3c4b0cf9546172f4ec"}
Feb 19 00:32:21 crc kubenswrapper[5108]: I0219 00:32:21.275136 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_9b123c1b-1175-4dfb-ba36-ed703e1ebfab/manage-dockerfile/0.log"
Feb 19 00:32:22 crc kubenswrapper[5108]: I0219 00:32:22.250633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerStarted","Data":"e8d816116ba0869dd4d144301c54af7ae1fd770f150abb7cdd0e5427ed592326"}
Feb 19 00:32:22 crc kubenswrapper[5108]: I0219 00:32:22.274142 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.274119296 podStartE2EDuration="5.274119296s" podCreationTimestamp="2026-02-19 00:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:32:22.271389941 +0000 UTC m=+1401.238036299" watchObservedRunningTime="2026-02-19 00:32:22.274119296 +0000 UTC m=+1401.240765644"
Feb 19 00:32:52 crc kubenswrapper[5108]: I0219 00:32:52.483140 5108 generic.go:358] "Generic (PLEG): container finished" podID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerID="e8d816116ba0869dd4d144301c54af7ae1fd770f150abb7cdd0e5427ed592326" exitCode=0
Feb 19 00:32:52 crc kubenswrapper[5108]: I0219 00:32:52.483204 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerDied","Data":"e8d816116ba0869dd4d144301c54af7ae1fd770f150abb7cdd0e5427ed592326"}
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.842614 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956226 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956328 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956420 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956473 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956535 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956565 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956628 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmtxg\" (UniqueName: \"kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956681 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956768 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956788 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956828 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956855 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.956919 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir\") pod \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\" (UID: \"9b123c1b-1175-4dfb-ba36-ed703e1ebfab\") "
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.957244 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.957297 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.958490 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.958519 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.958649 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.959180 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.959875 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.962953 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-pull") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "builder-dockercfg-2hd4q-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.963155 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push" (OuterVolumeSpecName: "builder-dockercfg-2hd4q-push") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "builder-dockercfg-2hd4q-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.963292 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg" (OuterVolumeSpecName: "kube-api-access-bmtxg") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "kube-api-access-bmtxg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:32:53 crc kubenswrapper[5108]: I0219 00:32:53.963546 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058337 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bmtxg\" (UniqueName: \"kubernetes.io/projected/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-kube-api-access-bmtxg\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058376 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058390 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058402 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058414 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-pull\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058457 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058469 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058480 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-2hd4q-push\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-builder-dockercfg-2hd4q-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058490 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058501 5108 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.058514 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.504680 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.504678 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9b123c1b-1175-4dfb-ba36-ed703e1ebfab","Type":"ContainerDied","Data":"ba02b836d341ffb77f17d1ae690b6862088df693e6a4ccce588e6a6c90ede5fd"}
Feb 19 00:32:54 crc kubenswrapper[5108]: I0219 00:32:54.504857 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba02b836d341ffb77f17d1ae690b6862088df693e6a4ccce588e6a6c90ede5fd"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.493690 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jbk4s"]
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495292 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="git-clone"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495382 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="git-clone"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495462 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="manage-dockerfile"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495524 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="manage-dockerfile"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495582 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="docker-build"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495637 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="docker-build"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.495787 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b123c1b-1175-4dfb-ba36-ed703e1ebfab" containerName="docker-build"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.743879 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.754551 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.834753 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jbk4s"]
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.834911 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.838991 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-4dhl4\""
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.855669 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfc2\" (UniqueName: \"kubernetes.io/projected/63411ded-69e9-4e12-b88f-fc4391ab7fae-kube-api-access-pvfc2\") pod \"infrawatch-operators-jbk4s\" (UID: \"63411ded-69e9-4e12-b88f-fc4391ab7fae\") " pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.957507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfc2\" (UniqueName: \"kubernetes.io/projected/63411ded-69e9-4e12-b88f-fc4391ab7fae-kube-api-access-pvfc2\") pod \"infrawatch-operators-jbk4s\" (UID: \"63411ded-69e9-4e12-b88f-fc4391ab7fae\") " pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:32:57 crc kubenswrapper[5108]: I0219 00:32:57.978478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfc2\" (UniqueName: \"kubernetes.io/projected/63411ded-69e9-4e12-b88f-fc4391ab7fae-kube-api-access-pvfc2\") pod \"infrawatch-operators-jbk4s\" (UID: \"63411ded-69e9-4e12-b88f-fc4391ab7fae\") " pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:32:58 crc kubenswrapper[5108]: I0219 00:32:58.180965 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:32:58 crc kubenswrapper[5108]: I0219 00:32:58.482733 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jbk4s"]
Feb 19 00:32:58 crc kubenswrapper[5108]: I0219 00:32:58.541718 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jbk4s" event={"ID":"63411ded-69e9-4e12-b88f-fc4391ab7fae","Type":"ContainerStarted","Data":"f07bd1c55b7a678ff387210672280c8c55e8412c72b4ed8ad8595945fd57d05f"}
Feb 19 00:32:58 crc kubenswrapper[5108]: I0219 00:32:58.618317 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9b123c1b-1175-4dfb-ba36-ed703e1ebfab" (UID: "9b123c1b-1175-4dfb-ba36-ed703e1ebfab"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:32:58 crc kubenswrapper[5108]: I0219 00:32:58.671670 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9b123c1b-1175-4dfb-ba36-ed703e1ebfab-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:10 crc kubenswrapper[5108]: I0219 00:33:10.643656 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jbk4s" event={"ID":"63411ded-69e9-4e12-b88f-fc4391ab7fae","Type":"ContainerStarted","Data":"3a5364cddf03101b5651d488975377956b925189a29747d08555fe2f891496e5"}
Feb 19 00:33:10 crc kubenswrapper[5108]: I0219 00:33:10.671514 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jbk4s" podStartSLOduration=2.257453908 podStartE2EDuration="13.671481275s" podCreationTimestamp="2026-02-19 00:32:57 +0000 UTC" firstStartedPulling="2026-02-19 00:32:58.494445687 +0000 UTC m=+1437.461092005" lastFinishedPulling="2026-02-19 00:33:09.908473064 +0000 UTC m=+1448.875119372" observedRunningTime="2026-02-19 00:33:10.662907441 +0000 UTC m=+1449.629553779" watchObservedRunningTime="2026-02-19 00:33:10.671481275 +0000 UTC m=+1449.638127623"
Feb 19 00:33:18 crc kubenswrapper[5108]: I0219 00:33:18.182199 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:33:18 crc kubenswrapper[5108]: I0219 00:33:18.184003 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:33:18 crc kubenswrapper[5108]: I0219 00:33:18.241811 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:33:18 crc kubenswrapper[5108]: I0219 00:33:18.748839 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jbk4s"
Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.549577 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz"]
Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.560618 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz"]
Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.560830 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.597474 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fwl\" (UniqueName: \"kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.597671 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.597793 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.699907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-99fwl\" (UniqueName: \"kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " 
pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.700089 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.700163 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.701145 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.701387 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.741919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-99fwl\" (UniqueName: \"kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:20 crc kubenswrapper[5108]: I0219 00:33:20.885563 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.319015 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz"] Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.353649 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz"] Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.362491 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.372537 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz"] Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.409341 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.409415 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffn8\" (UniqueName: \"kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.409471 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.510786 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle\") 
pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.511143 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gffn8\" (UniqueName: \"kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.511173 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.511332 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.511545 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " 
pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.534981 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffn8\" (UniqueName: \"kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.737829 5108 generic.go:358] "Generic (PLEG): container finished" podID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerID="ebc0aa8e27906a5304cd310ea0eebfe1c1ef415207233f7a031361072d93ff93" exitCode=0 Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.737902 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" event={"ID":"3cf4bff2-0bb0-4309-a36b-83a7f0829656","Type":"ContainerDied","Data":"ebc0aa8e27906a5304cd310ea0eebfe1c1ef415207233f7a031361072d93ff93"} Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.737952 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" event={"ID":"3cf4bff2-0bb0-4309-a36b-83a7f0829656","Type":"ContainerStarted","Data":"78e738cd0f5a61b350b01e66cf11bf430bf1dc38eacd84e94313c6254caf6f9f"} Feb 19 00:33:21 crc kubenswrapper[5108]: I0219 00:33:21.759765 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.214462 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz"] Feb 19 00:33:22 crc kubenswrapper[5108]: W0219 00:33:22.222189 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbedb826e_31a6_4ec0_a4ec_257719a18040.slice/crio-6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752 WatchSource:0}: Error finding container 6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752: Status 404 returned error can't find the container with id 6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752 Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.749965 5108 generic.go:358] "Generic (PLEG): container finished" podID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerID="008fe7231ca3e4a36b89838dc9fd3d605f8acde857321f94e14b99b9f068a8d8" exitCode=0 Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.750034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" event={"ID":"3cf4bff2-0bb0-4309-a36b-83a7f0829656","Type":"ContainerDied","Data":"008fe7231ca3e4a36b89838dc9fd3d605f8acde857321f94e14b99b9f068a8d8"} Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.754493 5108 generic.go:358] "Generic (PLEG): container finished" podID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerID="ae8ca72ea0d60bce2ccec33cb46bb6766ddad75d580cf19fb82873826d308dc6" exitCode=0 Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.754638 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" 
event={"ID":"bedb826e-31a6-4ec0-a4ec-257719a18040","Type":"ContainerDied","Data":"ae8ca72ea0d60bce2ccec33cb46bb6766ddad75d580cf19fb82873826d308dc6"} Feb 19 00:33:22 crc kubenswrapper[5108]: I0219 00:33:22.754675 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" event={"ID":"bedb826e-31a6-4ec0-a4ec-257719a18040","Type":"ContainerStarted","Data":"6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752"} Feb 19 00:33:23 crc kubenswrapper[5108]: I0219 00:33:23.767368 5108 generic.go:358] "Generic (PLEG): container finished" podID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerID="6c05c6814bb364c88f1366c45b1e59e1ce64ff188d4f92491ef7e73487493b9a" exitCode=0 Feb 19 00:33:23 crc kubenswrapper[5108]: I0219 00:33:23.768119 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" event={"ID":"3cf4bff2-0bb0-4309-a36b-83a7f0829656","Type":"ContainerDied","Data":"6c05c6814bb364c88f1366c45b1e59e1ce64ff188d4f92491ef7e73487493b9a"} Feb 19 00:33:23 crc kubenswrapper[5108]: I0219 00:33:23.772227 5108 generic.go:358] "Generic (PLEG): container finished" podID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerID="c30de62d0fb9ae69d9be9df58e3867de4b93df581e0f9248f68fbafc0da9bf14" exitCode=0 Feb 19 00:33:23 crc kubenswrapper[5108]: I0219 00:33:23.772339 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" event={"ID":"bedb826e-31a6-4ec0-a4ec-257719a18040","Type":"ContainerDied","Data":"c30de62d0fb9ae69d9be9df58e3867de4b93df581e0f9248f68fbafc0da9bf14"} Feb 19 00:33:24 crc kubenswrapper[5108]: I0219 00:33:24.784857 5108 generic.go:358] "Generic (PLEG): container finished" podID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerID="3c4975fa8af43d103f5fd56baed37cdc9bdeaa6ca2425feee93164882d70c6f0" exitCode=0 Feb 19 
00:33:24 crc kubenswrapper[5108]: I0219 00:33:24.785003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" event={"ID":"bedb826e-31a6-4ec0-a4ec-257719a18040","Type":"ContainerDied","Data":"3c4975fa8af43d103f5fd56baed37cdc9bdeaa6ca2425feee93164882d70c6f0"} Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.031120 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.061646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle\") pod \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.061829 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util\") pod \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.062403 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99fwl\" (UniqueName: \"kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl\") pod \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\" (UID: \"3cf4bff2-0bb0-4309-a36b-83a7f0829656\") " Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.065343 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle" (OuterVolumeSpecName: "bundle") pod "3cf4bff2-0bb0-4309-a36b-83a7f0829656" (UID: "3cf4bff2-0bb0-4309-a36b-83a7f0829656"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.069093 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl" (OuterVolumeSpecName: "kube-api-access-99fwl") pod "3cf4bff2-0bb0-4309-a36b-83a7f0829656" (UID: "3cf4bff2-0bb0-4309-a36b-83a7f0829656"). InnerVolumeSpecName "kube-api-access-99fwl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.086604 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util" (OuterVolumeSpecName: "util") pod "3cf4bff2-0bb0-4309-a36b-83a7f0829656" (UID: "3cf4bff2-0bb0-4309-a36b-83a7f0829656"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.164449 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99fwl\" (UniqueName: \"kubernetes.io/projected/3cf4bff2-0bb0-4309-a36b-83a7f0829656-kube-api-access-99fwl\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.164544 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.164565 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf4bff2-0bb0-4309-a36b-83a7f0829656-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.794553 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" 
event={"ID":"3cf4bff2-0bb0-4309-a36b-83a7f0829656","Type":"ContainerDied","Data":"78e738cd0f5a61b350b01e66cf11bf430bf1dc38eacd84e94313c6254caf6f9f"} Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.794841 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e738cd0f5a61b350b01e66cf11bf430bf1dc38eacd84e94313c6254caf6f9f" Feb 19 00:33:25 crc kubenswrapper[5108]: I0219 00:33:25.794609 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4bxhz" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.102031 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.180474 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util\") pod \"bedb826e-31a6-4ec0-a4ec-257719a18040\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.180658 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gffn8\" (UniqueName: \"kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8\") pod \"bedb826e-31a6-4ec0-a4ec-257719a18040\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.180779 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle\") pod \"bedb826e-31a6-4ec0-a4ec-257719a18040\" (UID: \"bedb826e-31a6-4ec0-a4ec-257719a18040\") " Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.181535 5108 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle" (OuterVolumeSpecName: "bundle") pod "bedb826e-31a6-4ec0-a4ec-257719a18040" (UID: "bedb826e-31a6-4ec0-a4ec-257719a18040"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.184642 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8" (OuterVolumeSpecName: "kube-api-access-gffn8") pod "bedb826e-31a6-4ec0-a4ec-257719a18040" (UID: "bedb826e-31a6-4ec0-a4ec-257719a18040"). InnerVolumeSpecName "kube-api-access-gffn8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.193007 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util" (OuterVolumeSpecName: "util") pod "bedb826e-31a6-4ec0-a4ec-257719a18040" (UID: "bedb826e-31a6-4ec0-a4ec-257719a18040"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.282303 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gffn8\" (UniqueName: \"kubernetes.io/projected/bedb826e-31a6-4ec0-a4ec-257719a18040-kube-api-access-gffn8\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.282361 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.282382 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bedb826e-31a6-4ec0-a4ec-257719a18040-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.803393 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" event={"ID":"bedb826e-31a6-4ec0-a4ec-257719a18040","Type":"ContainerDied","Data":"6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752"} Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.803430 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a166e7f09f1226701ed4d0995ac801cc85850459312e34cced600677e3ab752" Feb 19 00:33:26 crc kubenswrapper[5108]: I0219 00:33:26.803452 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c0952wjz" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.817327 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k"] Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819510 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819548 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819588 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="util" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819597 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="util" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819612 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="util" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819622 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="util" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819660 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="pull" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819671 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="pull" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819683 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819691 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819702 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="pull" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819709 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="pull" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819853 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="bedb826e-31a6-4ec0-a4ec-257719a18040" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.819877 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3cf4bff2-0bb0-4309-a36b-83a7f0829656" containerName="extract" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.829178 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.833818 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k"] Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.834524 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-td7jw\"" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.972590 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvk9\" (UniqueName: \"kubernetes.io/projected/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-kube-api-access-9tvk9\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:32 crc kubenswrapper[5108]: I0219 00:33:32.972721 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-runner\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.073585 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-runner\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.073709 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvk9\" (UniqueName: 
\"kubernetes.io/projected/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-kube-api-access-9tvk9\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.074299 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-runner\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.094622 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvk9\" (UniqueName: \"kubernetes.io/projected/39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a-kube-api-access-9tvk9\") pod \"smart-gateway-operator-784ccd9b9c-pdw7k\" (UID: \"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a\") " pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.145790 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.580331 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k"] Feb 19 00:33:33 crc kubenswrapper[5108]: W0219 00:33:33.584007 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39dc5a93_5fd4_4eb5_b298_2039cf1d7b2a.slice/crio-a994ce7c8c664711619ce532ecf179ba3a7ef017deb7546e7f83150ea9ddf1b3 WatchSource:0}: Error finding container a994ce7c8c664711619ce532ecf179ba3a7ef017deb7546e7f83150ea9ddf1b3: Status 404 returned error can't find the container with id a994ce7c8c664711619ce532ecf179ba3a7ef017deb7546e7f83150ea9ddf1b3 Feb 19 00:33:33 crc kubenswrapper[5108]: I0219 00:33:33.866539 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" event={"ID":"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a","Type":"ContainerStarted","Data":"a994ce7c8c664711619ce532ecf179ba3a7ef017deb7546e7f83150ea9ddf1b3"} Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.046157 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp"] Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.128291 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp"] Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.128441 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.130999 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-2njvk\"" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.289029 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xcw\" (UniqueName: \"kubernetes.io/projected/dcd58415-c463-451c-b96f-d49dadf7fd54-kube-api-access-c6xcw\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.289267 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/dcd58415-c463-451c-b96f-d49dadf7fd54-runner\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.390847 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/dcd58415-c463-451c-b96f-d49dadf7fd54-runner\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.390954 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xcw\" (UniqueName: \"kubernetes.io/projected/dcd58415-c463-451c-b96f-d49dadf7fd54-kube-api-access-c6xcw\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " 
pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.391404 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/dcd58415-c463-451c-b96f-d49dadf7fd54-runner\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.412616 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xcw\" (UniqueName: \"kubernetes.io/projected/dcd58415-c463-451c-b96f-d49dadf7fd54-kube-api-access-c6xcw\") pod \"service-telemetry-operator-685f4dcc89-zhvhp\" (UID: \"dcd58415-c463-451c-b96f-d49dadf7fd54\") " pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.446498 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" Feb 19 00:33:34 crc kubenswrapper[5108]: I0219 00:33:34.897879 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp"] Feb 19 00:33:34 crc kubenswrapper[5108]: W0219 00:33:34.933089 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcd58415_c463_451c_b96f_d49dadf7fd54.slice/crio-da63e145c1df1542bb760fd12e411141b28ec919e761ff4b39155e9f9f5e4492 WatchSource:0}: Error finding container da63e145c1df1542bb760fd12e411141b28ec919e761ff4b39155e9f9f5e4492: Status 404 returned error can't find the container with id da63e145c1df1542bb760fd12e411141b28ec919e761ff4b39155e9f9f5e4492 Feb 19 00:33:35 crc kubenswrapper[5108]: I0219 00:33:35.889207 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" event={"ID":"dcd58415-c463-451c-b96f-d49dadf7fd54","Type":"ContainerStarted","Data":"da63e145c1df1542bb760fd12e411141b28ec919e761ff4b39155e9f9f5e4492"} Feb 19 00:33:56 crc kubenswrapper[5108]: I0219 00:33:56.059108 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" event={"ID":"dcd58415-c463-451c-b96f-d49dadf7fd54","Type":"ContainerStarted","Data":"55af3c977e0fd62fecf99d15f40418760705ed81ad2f8de81f9e8945048f5c52"} Feb 19 00:33:56 crc kubenswrapper[5108]: I0219 00:33:56.060351 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" event={"ID":"39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a","Type":"ContainerStarted","Data":"ab81994ea3f9890d546273fef490f6957ba79c0d78a3fa2d29a04530b8228e39"} Feb 19 00:33:56 crc kubenswrapper[5108]: I0219 00:33:56.077021 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/service-telemetry-operator-685f4dcc89-zhvhp" podStartSLOduration=1.633456953 podStartE2EDuration="22.077002596s" podCreationTimestamp="2026-02-19 00:33:34 +0000 UTC" firstStartedPulling="2026-02-19 00:33:34.939007766 +0000 UTC m=+1473.905654074" lastFinishedPulling="2026-02-19 00:33:55.382553409 +0000 UTC m=+1494.349199717" observedRunningTime="2026-02-19 00:33:56.073107889 +0000 UTC m=+1495.039754197" watchObservedRunningTime="2026-02-19 00:33:56.077002596 +0000 UTC m=+1495.043648904" Feb 19 00:33:56 crc kubenswrapper[5108]: I0219 00:33:56.095559 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-784ccd9b9c-pdw7k" podStartSLOduration=2.278985499 podStartE2EDuration="24.095537462s" podCreationTimestamp="2026-02-19 00:33:32 +0000 UTC" firstStartedPulling="2026-02-19 00:33:33.586167538 +0000 UTC m=+1472.552813846" lastFinishedPulling="2026-02-19 00:33:55.402719501 +0000 UTC m=+1494.369365809" observedRunningTime="2026-02-19 00:33:56.089563069 +0000 UTC m=+1495.056209397" watchObservedRunningTime="2026-02-19 00:33:56.095537462 +0000 UTC m=+1495.062183770" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.151365 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524354-2ck4f"] Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.159896 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.163088 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.163269 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.163405 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.164333 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524354-2ck4f"] Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.274101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbjdp\" (UniqueName: \"kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp\") pod \"auto-csr-approver-29524354-2ck4f\" (UID: \"0252aa21-3d8f-424d-a3b4-7d323b1677de\") " pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.375597 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbjdp\" (UniqueName: \"kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp\") pod \"auto-csr-approver-29524354-2ck4f\" (UID: \"0252aa21-3d8f-424d-a3b4-7d323b1677de\") " pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.407127 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbjdp\" (UniqueName: \"kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp\") pod \"auto-csr-approver-29524354-2ck4f\" (UID: 
\"0252aa21-3d8f-424d-a3b4-7d323b1677de\") " pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.495785 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:00 crc kubenswrapper[5108]: I0219 00:34:00.723704 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524354-2ck4f"] Feb 19 00:34:01 crc kubenswrapper[5108]: I0219 00:34:01.112549 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" event={"ID":"0252aa21-3d8f-424d-a3b4-7d323b1677de","Type":"ContainerStarted","Data":"66a83426443aaf3b099b684441e7489da5342eac2e6cf158e10db75433aaafef"} Feb 19 00:34:02 crc kubenswrapper[5108]: I0219 00:34:02.122727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" event={"ID":"0252aa21-3d8f-424d-a3b4-7d323b1677de","Type":"ContainerStarted","Data":"ef57f2e764d47f7c277d3731ef11c36e87f41ba35b7899e45550ab2d58242a03"} Feb 19 00:34:02 crc kubenswrapper[5108]: I0219 00:34:02.139876 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" podStartSLOduration=1.15827609 podStartE2EDuration="2.139858033s" podCreationTimestamp="2026-02-19 00:34:00 +0000 UTC" firstStartedPulling="2026-02-19 00:34:00.745790788 +0000 UTC m=+1499.712437096" lastFinishedPulling="2026-02-19 00:34:01.727372731 +0000 UTC m=+1500.694019039" observedRunningTime="2026-02-19 00:34:02.13938005 +0000 UTC m=+1501.106026368" watchObservedRunningTime="2026-02-19 00:34:02.139858033 +0000 UTC m=+1501.106504341" Feb 19 00:34:02 crc kubenswrapper[5108]: I0219 00:34:02.527091 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:34:02 crc 
kubenswrapper[5108]: I0219 00:34:02.527573 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:34:02 crc kubenswrapper[5108]: I0219 00:34:02.542013 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:34:02 crc kubenswrapper[5108]: I0219 00:34:02.544331 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:34:03 crc kubenswrapper[5108]: I0219 00:34:03.131484 5108 generic.go:358] "Generic (PLEG): container finished" podID="0252aa21-3d8f-424d-a3b4-7d323b1677de" containerID="ef57f2e764d47f7c277d3731ef11c36e87f41ba35b7899e45550ab2d58242a03" exitCode=0 Feb 19 00:34:03 crc kubenswrapper[5108]: I0219 00:34:03.131559 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" event={"ID":"0252aa21-3d8f-424d-a3b4-7d323b1677de","Type":"ContainerDied","Data":"ef57f2e764d47f7c277d3731ef11c36e87f41ba35b7899e45550ab2d58242a03"} Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.401278 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.534069 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbjdp\" (UniqueName: \"kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp\") pod \"0252aa21-3d8f-424d-a3b4-7d323b1677de\" (UID: \"0252aa21-3d8f-424d-a3b4-7d323b1677de\") " Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.539699 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp" (OuterVolumeSpecName: "kube-api-access-zbjdp") pod "0252aa21-3d8f-424d-a3b4-7d323b1677de" (UID: "0252aa21-3d8f-424d-a3b4-7d323b1677de"). InnerVolumeSpecName "kube-api-access-zbjdp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.635076 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbjdp\" (UniqueName: \"kubernetes.io/projected/0252aa21-3d8f-424d-a3b4-7d323b1677de-kube-api-access-zbjdp\") on node \"crc\" DevicePath \"\"" Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.923362 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-4tnnb"] Feb 19 00:34:04 crc kubenswrapper[5108]: I0219 00:34:04.933235 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-4tnnb"] Feb 19 00:34:05 crc kubenswrapper[5108]: I0219 00:34:05.149505 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" event={"ID":"0252aa21-3d8f-424d-a3b4-7d323b1677de","Type":"ContainerDied","Data":"66a83426443aaf3b099b684441e7489da5342eac2e6cf158e10db75433aaafef"} Feb 19 00:34:05 crc kubenswrapper[5108]: I0219 00:34:05.149545 5108 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="66a83426443aaf3b099b684441e7489da5342eac2e6cf158e10db75433aaafef" Feb 19 00:34:05 crc kubenswrapper[5108]: I0219 00:34:05.149609 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524354-2ck4f" Feb 19 00:34:05 crc kubenswrapper[5108]: I0219 00:34:05.855765 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc4ba1e-79ea-4b20-af59-e6772d445069" path="/var/lib/kubelet/pods/3dc4ba1e-79ea-4b20-af59-e6772d445069/volumes" Feb 19 00:34:06 crc kubenswrapper[5108]: I0219 00:34:06.145329 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:34:06 crc kubenswrapper[5108]: I0219 00:34:06.145402 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:34:10 crc kubenswrapper[5108]: I0219 00:34:10.381502 5108 scope.go:117] "RemoveContainer" containerID="bc850cec2aa38b823413308977cca0d31a88d037ac4227bd23982a3cc26fd299" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.340286 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.341274 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0252aa21-3d8f-424d-a3b4-7d323b1677de" containerName="oc" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.341286 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0252aa21-3d8f-424d-a3b4-7d323b1677de" containerName="oc" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.341403 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0252aa21-3d8f-424d-a3b4-7d323b1677de" containerName="oc" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.496815 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.497005 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.500048 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.500774 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.500888 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.502398 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-xjvz6\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.502642 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.502775 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.502908 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.601045 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.601478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkmk\" (UniqueName: \"kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.601584 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.601819 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.601913 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.602094 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.602170 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703673 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703713 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703752 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703789 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkmk\" (UniqueName: \"kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703823 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.703851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc 
kubenswrapper[5108]: I0219 00:34:16.703867 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.705641 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.710142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.710473 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.710904 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users\") pod 
\"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.718687 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.719920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.726130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkmk\" (UniqueName: \"kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk\") pod \"default-interconnect-55bf8d5cb-d28h2\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:16 crc kubenswrapper[5108]: I0219 00:34:16.814599 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.095596 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.105115 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.105200 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.209827 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzjf\" (UniqueName: \"kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.209954 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.210019 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.300296 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:34:17 crc kubenswrapper[5108]: W0219 00:34:17.308859 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b307641_5074_4c53_b22a_03b3689c4b0d.slice/crio-9063d3370a7eeecb1da239bbc047620eee8a96baa3b698d801ff4a99f89879d2 
WatchSource:0}: Error finding container 9063d3370a7eeecb1da239bbc047620eee8a96baa3b698d801ff4a99f89879d2: Status 404 returned error can't find the container with id 9063d3370a7eeecb1da239bbc047620eee8a96baa3b698d801ff4a99f89879d2 Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.310896 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.310996 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.311072 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzjf\" (UniqueName: \"kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.311841 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.312531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.334901 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzjf\" (UniqueName: \"kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf\") pod \"community-operators-hq4pr\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.434448 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:17 crc kubenswrapper[5108]: I0219 00:34:17.666576 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 19 00:34:17 crc kubenswrapper[5108]: W0219 00:34:17.670126 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8fb48ba_47f0_4abb_940f_de3795d93136.slice/crio-bd949dfa546f922e96223a561a22cc17e871d1523bbf2f1cb708421bd091b2ef WatchSource:0}: Error finding container bd949dfa546f922e96223a561a22cc17e871d1523bbf2f1cb708421bd091b2ef: Status 404 returned error can't find the container with id bd949dfa546f922e96223a561a22cc17e871d1523bbf2f1cb708421bd091b2ef Feb 19 00:34:18 crc kubenswrapper[5108]: I0219 00:34:18.239458 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" event={"ID":"8b307641-5074-4c53-b22a-03b3689c4b0d","Type":"ContainerStarted","Data":"9063d3370a7eeecb1da239bbc047620eee8a96baa3b698d801ff4a99f89879d2"} Feb 19 00:34:18 crc kubenswrapper[5108]: I0219 00:34:18.241415 5108 generic.go:358] "Generic (PLEG): container finished" 
podID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerID="eb1654be25831839a0d953dd0e7c03953fdcdff9fb8aac17037e569e1d50c3b7" exitCode=0 Feb 19 00:34:18 crc kubenswrapper[5108]: I0219 00:34:18.241524 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerDied","Data":"eb1654be25831839a0d953dd0e7c03953fdcdff9fb8aac17037e569e1d50c3b7"} Feb 19 00:34:18 crc kubenswrapper[5108]: I0219 00:34:18.241560 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerStarted","Data":"bd949dfa546f922e96223a561a22cc17e871d1523bbf2f1cb708421bd091b2ef"} Feb 19 00:34:23 crc kubenswrapper[5108]: I0219 00:34:23.296707 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" event={"ID":"8b307641-5074-4c53-b22a-03b3689c4b0d","Type":"ContainerStarted","Data":"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67"} Feb 19 00:34:23 crc kubenswrapper[5108]: I0219 00:34:23.334199 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" podStartSLOduration=2.369853244 podStartE2EDuration="7.334164171s" podCreationTimestamp="2026-02-19 00:34:16 +0000 UTC" firstStartedPulling="2026-02-19 00:34:17.312565161 +0000 UTC m=+1516.279211489" lastFinishedPulling="2026-02-19 00:34:22.276876078 +0000 UTC m=+1521.243522416" observedRunningTime="2026-02-19 00:34:23.320436736 +0000 UTC m=+1522.287083054" watchObservedRunningTime="2026-02-19 00:34:23.334164171 +0000 UTC m=+1522.300810509" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.831725 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.843457 5108 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.844269 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.878238 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-t8fk9\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.878527 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.878685 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.878866 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.879113 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.879273 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.879420 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.879992 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.880086 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.880324 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977540 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977597 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977616 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-web-config\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977633 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977656 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-tls-assets\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mq5\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-kube-api-access-99mq5\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977864 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.977974 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " 
pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.978004 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config-out\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.978026 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:26 crc kubenswrapper[5108]: I0219 00:34:26.978070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.079849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.079899 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config-out\") pod \"prometheus-default-0\" (UID: 
\"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080055 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080125 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080153 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080178 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-web-config\") 
pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080196 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080226 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-tls-assets\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080250 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-99mq5\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-kube-api-access-99mq5\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.080291 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: 
\"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.081057 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: E0219 00:34:27.081140 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:34:27 crc kubenswrapper[5108]: E0219 00:34:27.081195 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls podName:3323d343-e59b-4ad7-a4bc-8ccedb940dee nodeName:}" failed. No retries permitted until 2026-02-19 00:34:27.581180134 +0000 UTC m=+1526.547826442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3323d343-e59b-4ad7-a4bc-8ccedb940dee") : secret "default-prometheus-proxy-tls" not found Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.085634 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.085769 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.085798 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e2f95179a4cd55d28db1167c18dd91b9d7972d037d0c1d6a07691176a787944d/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.085707 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.086033 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3323d343-e59b-4ad7-a4bc-8ccedb940dee-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.094668 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-tls-assets\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.095923 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config-out\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.096439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-config\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.097189 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.099234 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-web-config\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.103202 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mq5\" (UniqueName: \"kubernetes.io/projected/3323d343-e59b-4ad7-a4bc-8ccedb940dee-kube-api-access-99mq5\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.141428 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3b1c9a1-0686-4d2c-8fe0-0fb3e34bb289\") pod 
\"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:27 crc kubenswrapper[5108]: E0219 00:34:27.588277 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:34:27 crc kubenswrapper[5108]: E0219 00:34:27.588763 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls podName:3323d343-e59b-4ad7-a4bc-8ccedb940dee nodeName:}" failed. No retries permitted until 2026-02-19 00:34:28.588733973 +0000 UTC m=+1527.555380311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3323d343-e59b-4ad7-a4bc-8ccedb940dee") : secret "default-prometheus-proxy-tls" not found Feb 19 00:34:27 crc kubenswrapper[5108]: I0219 00:34:27.588094 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:28 crc kubenswrapper[5108]: I0219 00:34:28.605558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:28 crc kubenswrapper[5108]: I0219 00:34:28.613800 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3323d343-e59b-4ad7-a4bc-8ccedb940dee-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3323d343-e59b-4ad7-a4bc-8ccedb940dee\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:34:28 crc kubenswrapper[5108]: I0219 00:34:28.691399 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:34:28 crc kubenswrapper[5108]: I0219 00:34:28.979292 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:34:29 crc kubenswrapper[5108]: I0219 00:34:29.355034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerStarted","Data":"5c5c108733946ee96866af3b90b250a5e7e82fbf72710c89cc41ff963022e167"} Feb 19 00:34:32 crc kubenswrapper[5108]: I0219 00:34:32.383038 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerID="59c0a0cf2917702617b2a8ca2fbbc96c7e430dcf557f32b559d9002550d0c4df" exitCode=0 Feb 19 00:34:32 crc kubenswrapper[5108]: I0219 00:34:32.383210 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerDied","Data":"59c0a0cf2917702617b2a8ca2fbbc96c7e430dcf557f32b559d9002550d0c4df"} Feb 19 00:34:33 crc kubenswrapper[5108]: I0219 00:34:33.402181 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerStarted","Data":"b900a1cee926bd42413dce1dcbd4321f5bad551e0ebaafbea62b650031bb57d4"} Feb 19 00:34:33 crc kubenswrapper[5108]: I0219 00:34:33.424462 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-hq4pr" podStartSLOduration=2.673173358 podStartE2EDuration="16.424444873s" podCreationTimestamp="2026-02-19 00:34:17 +0000 UTC" firstStartedPulling="2026-02-19 00:34:18.242454771 +0000 UTC m=+1517.209101079" lastFinishedPulling="2026-02-19 00:34:31.993726246 +0000 UTC m=+1530.960372594" observedRunningTime="2026-02-19 00:34:33.420986419 +0000 UTC m=+1532.387632747" watchObservedRunningTime="2026-02-19 00:34:33.424444873 +0000 UTC m=+1532.391091181" Feb 19 00:34:34 crc kubenswrapper[5108]: I0219 00:34:34.411211 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerStarted","Data":"573f96f31011f03cb373ac04b1016147064e4f3ace2f23b6de3aeba10ec7267a"} Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.144758 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.144857 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.683203 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-dw958"] Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.689717 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.697745 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-dw958"] Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.732332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gdf\" (UniqueName: \"kubernetes.io/projected/21868270-1946-4c6b-9aec-fac51ff7301b-kube-api-access-t9gdf\") pod \"default-snmp-webhook-694dc457d5-dw958\" (UID: \"21868270-1946-4c6b-9aec-fac51ff7301b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.833686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9gdf\" (UniqueName: \"kubernetes.io/projected/21868270-1946-4c6b-9aec-fac51ff7301b-kube-api-access-t9gdf\") pod \"default-snmp-webhook-694dc457d5-dw958\" (UID: \"21868270-1946-4c6b-9aec-fac51ff7301b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" Feb 19 00:34:36 crc kubenswrapper[5108]: I0219 00:34:36.857699 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9gdf\" (UniqueName: \"kubernetes.io/projected/21868270-1946-4c6b-9aec-fac51ff7301b-kube-api-access-t9gdf\") pod \"default-snmp-webhook-694dc457d5-dw958\" (UID: \"21868270-1946-4c6b-9aec-fac51ff7301b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.013364 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.231018 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-dw958"] Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.438181 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.438236 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.445519 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" event={"ID":"21868270-1946-4c6b-9aec-fac51ff7301b","Type":"ContainerStarted","Data":"d995ea5fb768176bc1e5dcb058d23271648fb629d180690a43b4ffa4cf505b5a"} Feb 19 00:34:37 crc kubenswrapper[5108]: I0219 00:34:37.500659 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:38 crc kubenswrapper[5108]: I0219 00:34:38.510097 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:38 crc kubenswrapper[5108]: I0219 00:34:38.559677 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.269259 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.302162 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.302208 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.306355 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.306434 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.306503 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.306566 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.306775 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-kp7dp\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.307743 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382524 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382599 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlkd\" (UniqueName: 
\"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-kube-api-access-btlkd\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382666 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382814 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-tls-assets\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382843 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-config-volume\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.382903 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4dc5b-ac62-4280-8090-05fc1d198800-config-out\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.383235 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-58f0d589-67ce-418c-8862-d0d16af84e34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f0d589-67ce-418c-8862-d0d16af84e34\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.383268 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-web-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.383287 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.467723 5108 generic.go:358] "Generic (PLEG): container finished" podID="3323d343-e59b-4ad7-a4bc-8ccedb940dee" containerID="573f96f31011f03cb373ac04b1016147064e4f3ace2f23b6de3aeba10ec7267a" exitCode=0 Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.468033 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hq4pr" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="registry-server" containerID="cri-o://b900a1cee926bd42413dce1dcbd4321f5bad551e0ebaafbea62b650031bb57d4" gracePeriod=2 Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.468290 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" 
event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerDied","Data":"573f96f31011f03cb373ac04b1016147064e4f3ace2f23b6de3aeba10ec7267a"} Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.484928 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4dc5b-ac62-4280-8090-05fc1d198800-config-out\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-58f0d589-67ce-418c-8862-d0d16af84e34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f0d589-67ce-418c-8862-d0d16af84e34\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485050 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-web-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485078 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485115 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485143 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btlkd\" (UniqueName: \"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-kube-api-access-btlkd\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485181 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485253 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-tls-assets\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.485285 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-config-volume\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: E0219 00:34:40.486476 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found 
Feb 19 00:34:40 crc kubenswrapper[5108]: E0219 00:34:40.486547 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls podName:edf4dc5b-ac62-4280-8090-05fc1d198800 nodeName:}" failed. No retries permitted until 2026-02-19 00:34:40.986524406 +0000 UTC m=+1539.953170714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "edf4dc5b-ac62-4280-8090-05fc1d198800") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.493214 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.493264 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-58f0d589-67ce-418c-8862-d0d16af84e34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f0d589-67ce-418c-8862-d0d16af84e34\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18028fdf1a7b9690ccb7cb857f6421bdfb82f27f3fe90dff205aff4fa09f81b0/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.496439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.496888 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4dc5b-ac62-4280-8090-05fc1d198800-config-out\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.500649 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-config-volume\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.501884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.508500 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btlkd\" (UniqueName: \"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-kube-api-access-btlkd\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.509487 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-web-config\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.511160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/edf4dc5b-ac62-4280-8090-05fc1d198800-tls-assets\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.544889 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-58f0d589-67ce-418c-8862-d0d16af84e34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f0d589-67ce-418c-8862-d0d16af84e34\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: I0219 00:34:40.994638 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:40 crc kubenswrapper[5108]: E0219 00:34:40.994850 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:34:40 crc kubenswrapper[5108]: E0219 00:34:40.994973 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls podName:edf4dc5b-ac62-4280-8090-05fc1d198800 nodeName:}" failed. No retries permitted until 2026-02-19 00:34:41.994948189 +0000 UTC m=+1540.961594497 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "edf4dc5b-ac62-4280-8090-05fc1d198800") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:34:41 crc kubenswrapper[5108]: I0219 00:34:41.480126 5108 generic.go:358] "Generic (PLEG): container finished" podID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerID="b900a1cee926bd42413dce1dcbd4321f5bad551e0ebaafbea62b650031bb57d4" exitCode=0 Feb 19 00:34:41 crc kubenswrapper[5108]: I0219 00:34:41.480234 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerDied","Data":"b900a1cee926bd42413dce1dcbd4321f5bad551e0ebaafbea62b650031bb57d4"} Feb 19 00:34:42 crc kubenswrapper[5108]: I0219 00:34:42.011515 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:42 crc kubenswrapper[5108]: E0219 00:34:42.011707 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:34:42 crc kubenswrapper[5108]: E0219 00:34:42.011800 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls podName:edf4dc5b-ac62-4280-8090-05fc1d198800 nodeName:}" failed. No retries permitted until 2026-02-19 00:34:44.011777695 +0000 UTC m=+1542.978423993 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "edf4dc5b-ac62-4280-8090-05fc1d198800") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.061023 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.069462 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edf4dc5b-ac62-4280-8090-05fc1d198800-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"edf4dc5b-ac62-4280-8090-05fc1d198800\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.230409 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.850702 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.978683 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content\") pod \"e8fb48ba-47f0-4abb-940f-de3795d93136\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.979432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlzjf\" (UniqueName: \"kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf\") pod \"e8fb48ba-47f0-4abb-940f-de3795d93136\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.979538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities\") pod \"e8fb48ba-47f0-4abb-940f-de3795d93136\" (UID: \"e8fb48ba-47f0-4abb-940f-de3795d93136\") " Feb 19 00:34:44 crc kubenswrapper[5108]: I0219 00:34:44.981399 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities" (OuterVolumeSpecName: "utilities") pod "e8fb48ba-47f0-4abb-940f-de3795d93136" (UID: "e8fb48ba-47f0-4abb-940f-de3795d93136"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.000950 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf" (OuterVolumeSpecName: "kube-api-access-mlzjf") pod "e8fb48ba-47f0-4abb-940f-de3795d93136" (UID: "e8fb48ba-47f0-4abb-940f-de3795d93136"). InnerVolumeSpecName "kube-api-access-mlzjf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.036105 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.051188 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8fb48ba-47f0-4abb-940f-de3795d93136" (UID: "e8fb48ba-47f0-4abb-940f-de3795d93136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.081053 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.081088 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8fb48ba-47f0-4abb-940f-de3795d93136-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.081101 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mlzjf\" (UniqueName: \"kubernetes.io/projected/e8fb48ba-47f0-4abb-940f-de3795d93136-kube-api-access-mlzjf\") on node \"crc\" DevicePath \"\"" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.511861 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hq4pr" event={"ID":"e8fb48ba-47f0-4abb-940f-de3795d93136","Type":"ContainerDied","Data":"bd949dfa546f922e96223a561a22cc17e871d1523bbf2f1cb708421bd091b2ef"} Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.511946 5108 scope.go:117] "RemoveContainer" containerID="b900a1cee926bd42413dce1dcbd4321f5bad551e0ebaafbea62b650031bb57d4" Feb 19 00:34:45 crc 
kubenswrapper[5108]: I0219 00:34:45.512075 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hq4pr" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.523877 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" event={"ID":"21868270-1946-4c6b-9aec-fac51ff7301b","Type":"ContainerStarted","Data":"2035bb4452c57958f4faa33682d3de3d76756e600b465097c0ec06267a20a442"} Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.526984 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerStarted","Data":"93e53a1986370ff323f91bb40866fed537ca95f3ed28af362ead5e18c2eb4710"} Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.539302 5108 scope.go:117] "RemoveContainer" containerID="59c0a0cf2917702617b2a8ca2fbbc96c7e430dcf557f32b559d9002550d0c4df" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.559165 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-dw958" podStartSLOduration=1.985218624 podStartE2EDuration="9.559071971s" podCreationTimestamp="2026-02-19 00:34:36 +0000 UTC" firstStartedPulling="2026-02-19 00:34:37.232925626 +0000 UTC m=+1536.199571934" lastFinishedPulling="2026-02-19 00:34:44.806778963 +0000 UTC m=+1543.773425281" observedRunningTime="2026-02-19 00:34:45.536380461 +0000 UTC m=+1544.503026769" watchObservedRunningTime="2026-02-19 00:34:45.559071971 +0000 UTC m=+1544.525718279" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.568688 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.571911 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hq4pr"] Feb 
19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.587101 5108 scope.go:117] "RemoveContainer" containerID="eb1654be25831839a0d953dd0e7c03953fdcdff9fb8aac17037e569e1d50c3b7" Feb 19 00:34:45 crc kubenswrapper[5108]: I0219 00:34:45.854880 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" path="/var/lib/kubelet/pods/e8fb48ba-47f0-4abb-940f-de3795d93136/volumes" Feb 19 00:34:47 crc kubenswrapper[5108]: I0219 00:34:47.544324 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerStarted","Data":"4f55e5a0915a92af61427e6a4453208d1085ada00f1ac2e986310557a8df412f"} Feb 19 00:34:50 crc kubenswrapper[5108]: I0219 00:34:50.575122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerStarted","Data":"ea676cb7fd4a830a9c12816a111022b441e1c52f24ea4d476a759b608a82aaac"} Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.422982 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7"] Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429405 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="extract-content" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429450 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="extract-content" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429478 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="registry-server" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429485 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="registry-server" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429516 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="extract-utilities" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429525 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="extract-utilities" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.429660 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8fb48ba-47f0-4abb-940f-de3795d93136" containerName="registry-server" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.433843 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7"] Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.434015 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.436607 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.436772 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.437015 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.437176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-t4fgn\"" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.505883 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c1b23465-43aa-4a3e-9617-5137e887360c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.505979 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.506101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.506262 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p972s\" (UniqueName: \"kubernetes.io/projected/c1b23465-43aa-4a3e-9617-5137e887360c-kube-api-access-p972s\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.506407 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c1b23465-43aa-4a3e-9617-5137e887360c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.607747 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p972s\" (UniqueName: \"kubernetes.io/projected/c1b23465-43aa-4a3e-9617-5137e887360c-kube-api-access-p972s\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.607821 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/c1b23465-43aa-4a3e-9617-5137e887360c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.607855 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c1b23465-43aa-4a3e-9617-5137e887360c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.607895 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.607983 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: E0219 00:34:53.608186 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:34:53 crc kubenswrapper[5108]: E0219 00:34:53.608262 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls podName:c1b23465-43aa-4a3e-9617-5137e887360c nodeName:}" failed. No retries permitted until 2026-02-19 00:34:54.108237877 +0000 UTC m=+1553.074884195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" (UID: "c1b23465-43aa-4a3e-9617-5137e887360c") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.609017 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c1b23465-43aa-4a3e-9617-5137e887360c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.609124 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c1b23465-43aa-4a3e-9617-5137e887360c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.620170 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:53 crc 
kubenswrapper[5108]: I0219 00:34:53.622239 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerStarted","Data":"5bea0f4e6be21cd1e9ad4321884587d78ef67ca5a6e757042d9dae9e946bb7f8"} Feb 19 00:34:53 crc kubenswrapper[5108]: I0219 00:34:53.629876 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p972s\" (UniqueName: \"kubernetes.io/projected/c1b23465-43aa-4a3e-9617-5137e887360c-kube-api-access-p972s\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:54 crc kubenswrapper[5108]: I0219 00:34:54.113740 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:54 crc kubenswrapper[5108]: E0219 00:34:54.114044 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:34:54 crc kubenswrapper[5108]: E0219 00:34:54.114129 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls podName:c1b23465-43aa-4a3e-9617-5137e887360c nodeName:}" failed. No retries permitted until 2026-02-19 00:34:55.11411218 +0000 UTC m=+1554.080758488 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" (UID: "c1b23465-43aa-4a3e-9617-5137e887360c") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:34:54 crc kubenswrapper[5108]: I0219 00:34:54.630808 5108 generic.go:358] "Generic (PLEG): container finished" podID="edf4dc5b-ac62-4280-8090-05fc1d198800" containerID="4f55e5a0915a92af61427e6a4453208d1085ada00f1ac2e986310557a8df412f" exitCode=0 Feb 19 00:34:54 crc kubenswrapper[5108]: I0219 00:34:54.631078 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerDied","Data":"4f55e5a0915a92af61427e6a4453208d1085ada00f1ac2e986310557a8df412f"} Feb 19 00:34:55 crc kubenswrapper[5108]: I0219 00:34:55.130087 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:55 crc kubenswrapper[5108]: I0219 00:34:55.136841 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1b23465-43aa-4a3e-9617-5137e887360c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7\" (UID: \"c1b23465-43aa-4a3e-9617-5137e887360c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:55 crc kubenswrapper[5108]: I0219 00:34:55.285581 5108 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.491219 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9"] Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.565046 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9"] Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.565211 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.567861 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.568080 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.649585 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fm5h\" (UniqueName: \"kubernetes.io/projected/bf32a308-7483-43c7-80ec-21496776f93c-kube-api-access-5fm5h\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.649637 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: 
\"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.649747 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bf32a308-7483-43c7-80ec-21496776f93c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.649782 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bf32a308-7483-43c7-80ec-21496776f93c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.649844 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.750698 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bf32a308-7483-43c7-80ec-21496776f93c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.750796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.750851 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fm5h\" (UniqueName: \"kubernetes.io/projected/bf32a308-7483-43c7-80ec-21496776f93c-kube-api-access-5fm5h\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.750870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.750926 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bf32a308-7483-43c7-80ec-21496776f93c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: E0219 00:34:56.751010 5108 secret.go:189] 
Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:34:56 crc kubenswrapper[5108]: E0219 00:34:56.751101 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls podName:bf32a308-7483-43c7-80ec-21496776f93c nodeName:}" failed. No retries permitted until 2026-02-19 00:34:57.251080282 +0000 UTC m=+1556.217726580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" (UID: "bf32a308-7483-43c7-80ec-21496776f93c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.751588 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/bf32a308-7483-43c7-80ec-21496776f93c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.752476 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/bf32a308-7483-43c7-80ec-21496776f93c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.756649 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:56 crc kubenswrapper[5108]: I0219 00:34:56.781027 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fm5h\" (UniqueName: \"kubernetes.io/projected/bf32a308-7483-43c7-80ec-21496776f93c-kube-api-access-5fm5h\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:57 crc kubenswrapper[5108]: I0219 00:34:57.257094 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:57 crc kubenswrapper[5108]: E0219 00:34:57.257293 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:34:57 crc kubenswrapper[5108]: E0219 00:34:57.257390 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls podName:bf32a308-7483-43c7-80ec-21496776f93c nodeName:}" failed. No retries permitted until 2026-02-19 00:34:58.257367965 +0000 UTC m=+1557.224014273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" (UID: "bf32a308-7483-43c7-80ec-21496776f93c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:34:58 crc kubenswrapper[5108]: I0219 00:34:58.271793 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:58 crc kubenswrapper[5108]: I0219 00:34:58.276200 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf32a308-7483-43c7-80ec-21496776f93c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9\" (UID: \"bf32a308-7483-43c7-80ec-21496776f93c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:58 crc kubenswrapper[5108]: I0219 00:34:58.400052 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" Feb 19 00:34:59 crc kubenswrapper[5108]: I0219 00:34:59.358198 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7"] Feb 19 00:34:59 crc kubenswrapper[5108]: I0219 00:34:59.429245 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9"] Feb 19 00:34:59 crc kubenswrapper[5108]: I0219 00:34:59.666197 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3323d343-e59b-4ad7-a4bc-8ccedb940dee","Type":"ContainerStarted","Data":"8a08d67acbb1526e16a3019d552c9b3c32ca193774d47478f3301186ec39b876"} Feb 19 00:34:59 crc kubenswrapper[5108]: I0219 00:34:59.667474 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"8f45e48505b08a207707d212b0a1d86b3f8aaecb3c6b796ce76788b9262651e4"} Feb 19 00:34:59 crc kubenswrapper[5108]: I0219 00:34:59.696842 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.715544559 podStartE2EDuration="34.696826458s" podCreationTimestamp="2026-02-19 00:34:25 +0000 UTC" firstStartedPulling="2026-02-19 00:34:28.997591162 +0000 UTC m=+1527.964237480" lastFinishedPulling="2026-02-19 00:34:58.978873071 +0000 UTC m=+1557.945519379" observedRunningTime="2026-02-19 00:34:59.692761747 +0000 UTC m=+1558.659408055" watchObservedRunningTime="2026-02-19 00:34:59.696826458 +0000 UTC m=+1558.663472766" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.588871 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr"] Feb 19 
00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.599400 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.602792 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603062 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603117 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt595\" (UniqueName: \"kubernetes.io/projected/448a5226-c34a-469e-bc72-79158e2b2c92-kube-api-access-qt595\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603159 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603202 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/448a5226-c34a-469e-bc72-79158e2b2c92-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603261 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/448a5226-c34a-469e-bc72-79158e2b2c92-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.603412 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.628466 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr"] Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.688747 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"2faeaa3cfe6583b0af9c93a3dbb5fe7ec65006bd3102ad9344837f4287788386"} Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.704414 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/448a5226-c34a-469e-bc72-79158e2b2c92-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.704551 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.704608 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qt595\" (UniqueName: \"kubernetes.io/projected/448a5226-c34a-469e-bc72-79158e2b2c92-kube-api-access-qt595\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.704658 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.704737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/448a5226-c34a-469e-bc72-79158e2b2c92-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: E0219 00:35:00.705310 
5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:35:00 crc kubenswrapper[5108]: E0219 00:35:00.705363 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls podName:448a5226-c34a-469e-bc72-79158e2b2c92 nodeName:}" failed. No retries permitted until 2026-02-19 00:35:01.205347662 +0000 UTC m=+1560.171993970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" (UID: "448a5226-c34a-469e-bc72-79158e2b2c92") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.705737 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/448a5226-c34a-469e-bc72-79158e2b2c92-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.707092 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/448a5226-c34a-469e-bc72-79158e2b2c92-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.711088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:00 crc kubenswrapper[5108]: I0219 00:35:00.729252 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt595\" (UniqueName: \"kubernetes.io/projected/448a5226-c34a-469e-bc72-79158e2b2c92-kube-api-access-qt595\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:01 crc kubenswrapper[5108]: I0219 00:35:01.211553 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:01 crc kubenswrapper[5108]: E0219 00:35:01.211759 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:35:01 crc kubenswrapper[5108]: E0219 00:35:01.212098 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls podName:448a5226-c34a-469e-bc72-79158e2b2c92 nodeName:}" failed. No retries permitted until 2026-02-19 00:35:02.212071836 +0000 UTC m=+1561.178718164 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" (UID: "448a5226-c34a-469e-bc72-79158e2b2c92") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:35:01 crc kubenswrapper[5108]: I0219 00:35:01.719610 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"a4825a17a4f8bff2d2016c13209ecfd731f7a2e2849aa38c620745bcea311466"} Feb 19 00:35:01 crc kubenswrapper[5108]: I0219 00:35:01.725587 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"6fa9df6c3235e6107980022f3026ac178994a0e2ae82a14bd8892a44fc2374d7"} Feb 19 00:35:01 crc kubenswrapper[5108]: I0219 00:35:01.727489 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerStarted","Data":"bbfc45a1e7a5e50910d638fe1e82e89ed1720e86fc61e97f56337b9532913733"} Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.225831 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.249555 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/448a5226-c34a-469e-bc72-79158e2b2c92-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr\" (UID: \"448a5226-c34a-469e-bc72-79158e2b2c92\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.487143 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.755111 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerStarted","Data":"87a5efeca5befff105d513505ed822cd556e7242c5fed1260f5bb7eb50c06d11"} Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.761333 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"ff214db2051de0b9b8bc2417fd7fb7b78f44e38dde96ab82db26c83310606cdb"} Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.768613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"c038e62f8bae226d540731a502d6be108ab64845eccb062753b420c19fe80766"} Feb 19 00:35:02 crc kubenswrapper[5108]: I0219 00:35:02.956392 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr"] Feb 19 00:35:03 crc kubenswrapper[5108]: I0219 00:35:03.692051 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 19 00:35:03 crc 
kubenswrapper[5108]: I0219 00:35:03.792449 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"ffe73440a2e558dfdd027b8658e2c0fb3821afe0848919db3c0cce247cbd675b"} Feb 19 00:35:03 crc kubenswrapper[5108]: I0219 00:35:03.795292 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"edf4dc5b-ac62-4280-8090-05fc1d198800","Type":"ContainerStarted","Data":"996fb06def151ccba444481bc316c05645752d14d631920829a08b490973e3e9"} Feb 19 00:35:03 crc kubenswrapper[5108]: I0219 00:35:03.832135 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.271623103 podStartE2EDuration="24.832112506s" podCreationTimestamp="2026-02-19 00:34:39 +0000 UTC" firstStartedPulling="2026-02-19 00:34:54.6318918 +0000 UTC m=+1553.598538108" lastFinishedPulling="2026-02-19 00:35:03.192381203 +0000 UTC m=+1562.159027511" observedRunningTime="2026-02-19 00:35:03.819577144 +0000 UTC m=+1562.786223452" watchObservedRunningTime="2026-02-19 00:35:03.832112506 +0000 UTC m=+1562.798758814" Feb 19 00:35:04 crc kubenswrapper[5108]: I0219 00:35:04.804663 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"b8e7f15f5262becc83ba530e0d351c1e03ac86d24b8a3e76c9b4a9e3af6fa4f3"} Feb 19 00:35:04 crc kubenswrapper[5108]: I0219 00:35:04.805034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"1d339785d88235baee842ebecf0f8324c3b56aced8f56e4b156c076197f45927"} Feb 19 00:35:06 crc 
kubenswrapper[5108]: I0219 00:35:06.145525 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.145844 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.145887 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.146496 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.146560 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" gracePeriod=600 Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.824867 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" 
containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" exitCode=0 Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.824914 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4"} Feb 19 00:35:06 crc kubenswrapper[5108]: I0219 00:35:06.824993 5108 scope.go:117] "RemoveContainer" containerID="d38f558a933051f6d4612f6c63794db418d969c28d49c059a3a7b5256e907c6f" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.613279 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4"] Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.621258 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.623088 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.624287 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.635261 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4"] Feb 19 00:35:07 crc kubenswrapper[5108]: E0219 00:35:07.717060 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.802381 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/84c95b74-339a-4dce-9f5d-0f35cb34ed71-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.802455 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn7gl\" (UniqueName: \"kubernetes.io/projected/84c95b74-339a-4dce-9f5d-0f35cb34ed71-kube-api-access-bn7gl\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.802492 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/84c95b74-339a-4dce-9f5d-0f35cb34ed71-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.802637 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/84c95b74-339a-4dce-9f5d-0f35cb34ed71-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: 
\"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.837273 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:35:07 crc kubenswrapper[5108]: E0219 00:35:07.837521 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.903787 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn7gl\" (UniqueName: \"kubernetes.io/projected/84c95b74-339a-4dce-9f5d-0f35cb34ed71-kube-api-access-bn7gl\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.903857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/84c95b74-339a-4dce-9f5d-0f35cb34ed71-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.903974 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/84c95b74-339a-4dce-9f5d-0f35cb34ed71-sg-core-config\") pod 
\"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.904038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/84c95b74-339a-4dce-9f5d-0f35cb34ed71-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.905203 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/84c95b74-339a-4dce-9f5d-0f35cb34ed71-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.905557 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/84c95b74-339a-4dce-9f5d-0f35cb34ed71-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.919508 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/84c95b74-339a-4dce-9f5d-0f35cb34ed71-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 
00:35:07.919815 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn7gl\" (UniqueName: \"kubernetes.io/projected/84c95b74-339a-4dce-9f5d-0f35cb34ed71-kube-api-access-bn7gl\") pod \"default-cloud1-coll-event-smartgateway-769879f664-h6jb4\" (UID: \"84c95b74-339a-4dce-9f5d-0f35cb34ed71\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:07 crc kubenswrapper[5108]: I0219 00:35:07.941339 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.220174 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4"] Feb 19 00:35:08 crc kubenswrapper[5108]: W0219 00:35:08.231912 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c95b74_339a_4dce_9f5d_0f35cb34ed71.slice/crio-03034f627448dacc825428b1193a3bb4341bcf70b5db7e26942d6a13c3253047 WatchSource:0}: Error finding container 03034f627448dacc825428b1193a3bb4341bcf70b5db7e26942d6a13c3253047: Status 404 returned error can't find the container with id 03034f627448dacc825428b1193a3bb4341bcf70b5db7e26942d6a13c3253047 Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.406804 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p"] Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.421763 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.424636 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p"] Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.425540 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.613763 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.614620 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.614722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.614819 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbwdd\" (UniqueName: \"kubernetes.io/projected/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-kube-api-access-tbwdd\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.721721 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.721785 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.721824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tbwdd\" (UniqueName: \"kubernetes.io/projected/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-kube-api-access-tbwdd\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.722043 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.722228 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.722973 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.729556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.750603 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbwdd\" (UniqueName: \"kubernetes.io/projected/cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d-kube-api-access-tbwdd\") pod \"default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p\" (UID: \"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.753309 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.851586 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"fc667db82dfd84e938a774ec53379bfb96d397402f5d66191438caa3376aefd5"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.855883 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"3ee6436daa58cb72cf1723f49318a20190350d3307aaf0171300917ad2160d41"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.860309 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"d301f1309287252293668e7c4ef9884e829f9e5e0c98f96d451cab6316979a9e"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.865029 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerStarted","Data":"ace04add8471d12eef24dbb1ba7d3ea3da32f6b22dfb83271c1821b43f6024c6"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.865077 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" 
event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerStarted","Data":"eb30471c5c099333587742461f8b1230a7160e22d81aca93b95305131e81d422"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.865089 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerStarted","Data":"03034f627448dacc825428b1193a3bb4341bcf70b5db7e26942d6a13c3253047"} Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.889504 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" podStartSLOduration=7.691111745 podStartE2EDuration="15.8894902s" podCreationTimestamp="2026-02-19 00:34:53 +0000 UTC" firstStartedPulling="2026-02-19 00:34:59.629328807 +0000 UTC m=+1558.595975105" lastFinishedPulling="2026-02-19 00:35:07.827707242 +0000 UTC m=+1566.794353560" observedRunningTime="2026-02-19 00:35:08.873265167 +0000 UTC m=+1567.839911475" watchObservedRunningTime="2026-02-19 00:35:08.8894902 +0000 UTC m=+1567.856136508" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.903168 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" podStartSLOduration=1.593570336 podStartE2EDuration="1.903131712s" podCreationTimestamp="2026-02-19 00:35:07 +0000 UTC" firstStartedPulling="2026-02-19 00:35:08.234587363 +0000 UTC m=+1567.201233671" lastFinishedPulling="2026-02-19 00:35:08.544148739 +0000 UTC m=+1567.510795047" observedRunningTime="2026-02-19 00:35:08.887921718 +0000 UTC m=+1567.854568026" watchObservedRunningTime="2026-02-19 00:35:08.903131712 +0000 UTC m=+1567.869778020" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.909594 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" podStartSLOduration=5.0168368 podStartE2EDuration="12.909577688s" podCreationTimestamp="2026-02-19 00:34:56 +0000 UTC" firstStartedPulling="2026-02-19 00:34:59.967613666 +0000 UTC m=+1558.934259974" lastFinishedPulling="2026-02-19 00:35:07.860354554 +0000 UTC m=+1566.827000862" observedRunningTime="2026-02-19 00:35:08.907216623 +0000 UTC m=+1567.873862931" watchObservedRunningTime="2026-02-19 00:35:08.909577688 +0000 UTC m=+1567.876223996" Feb 19 00:35:08 crc kubenswrapper[5108]: I0219 00:35:08.931573 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" podStartSLOduration=4.116403032 podStartE2EDuration="8.931558348s" podCreationTimestamp="2026-02-19 00:35:00 +0000 UTC" firstStartedPulling="2026-02-19 00:35:02.984270455 +0000 UTC m=+1561.950916763" lastFinishedPulling="2026-02-19 00:35:07.799425771 +0000 UTC m=+1566.766072079" observedRunningTime="2026-02-19 00:35:08.931346872 +0000 UTC m=+1567.897993190" watchObservedRunningTime="2026-02-19 00:35:08.931558348 +0000 UTC m=+1567.898204656" Feb 19 00:35:09 crc kubenswrapper[5108]: I0219 00:35:09.234362 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p"] Feb 19 00:35:09 crc kubenswrapper[5108]: I0219 00:35:09.875431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerStarted","Data":"7b5b84de1ead3f50c8df57191ad0085eae173bca5f751359577ed8ab1656b7bf"} Feb 19 00:35:09 crc kubenswrapper[5108]: I0219 00:35:09.875672 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" 
event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerStarted","Data":"8860a89439a6b6f6a296503a8024efcf6f5460ee490db9b348ede48561aec8e8"} Feb 19 00:35:09 crc kubenswrapper[5108]: I0219 00:35:09.875682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerStarted","Data":"6cb6c174c62111eac6fbb649e8a0a13b205c76198033088aceff03a91ce6f81b"} Feb 19 00:35:09 crc kubenswrapper[5108]: I0219 00:35:09.897495 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" podStartSLOduration=1.6051990360000001 podStartE2EDuration="1.89747618s" podCreationTimestamp="2026-02-19 00:35:08 +0000 UTC" firstStartedPulling="2026-02-19 00:35:09.237768001 +0000 UTC m=+1568.204414309" lastFinishedPulling="2026-02-19 00:35:09.530045145 +0000 UTC m=+1568.496691453" observedRunningTime="2026-02-19 00:35:09.897096609 +0000 UTC m=+1568.863742917" watchObservedRunningTime="2026-02-19 00:35:09.89747618 +0000 UTC m=+1568.864122498" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.016011 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.030903 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.031110 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.050110 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwsr\" (UniqueName: \"kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.050162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.050204 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.151441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwsr\" (UniqueName: \"kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.151519 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content\") pod \"redhat-operators-v2t96\" 
(UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.151562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.152142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.152195 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.180919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwsr\" (UniqueName: \"kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr\") pod \"redhat-operators-v2t96\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.350325 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.773223 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:10 crc kubenswrapper[5108]: W0219 00:35:10.787387 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ac699b_2298_4231_a931_ef5cba33a4b9.slice/crio-dd17c4bc5a3f81d71a968273d043b430926d1842cc0f2d1aef552b7160a131fc WatchSource:0}: Error finding container dd17c4bc5a3f81d71a968273d043b430926d1842cc0f2d1aef552b7160a131fc: Status 404 returned error can't find the container with id dd17c4bc5a3f81d71a968273d043b430926d1842cc0f2d1aef552b7160a131fc Feb 19 00:35:10 crc kubenswrapper[5108]: I0219 00:35:10.883982 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerStarted","Data":"dd17c4bc5a3f81d71a968273d043b430926d1842cc0f2d1aef552b7160a131fc"} Feb 19 00:35:11 crc kubenswrapper[5108]: I0219 00:35:11.893307 5108 generic.go:358] "Generic (PLEG): container finished" podID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerID="d50ebf7d799e175f9ad88ed3848119a8ccb003ecbd5fdb06c2a27b5e2c3b886e" exitCode=0 Feb 19 00:35:11 crc kubenswrapper[5108]: I0219 00:35:11.893726 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerDied","Data":"d50ebf7d799e175f9ad88ed3848119a8ccb003ecbd5fdb06c2a27b5e2c3b886e"} Feb 19 00:35:13 crc kubenswrapper[5108]: I0219 00:35:13.692424 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 19 00:35:13 crc kubenswrapper[5108]: I0219 00:35:13.731605 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="service-telemetry/prometheus-default-0" Feb 19 00:35:13 crc kubenswrapper[5108]: I0219 00:35:13.913105 5108 generic.go:358] "Generic (PLEG): container finished" podID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerID="234a16cd076e46b9d570f42d4649941efd337e16ed99a686e57ebd5c5800cb3d" exitCode=0 Feb 19 00:35:13 crc kubenswrapper[5108]: I0219 00:35:13.913506 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerDied","Data":"234a16cd076e46b9d570f42d4649941efd337e16ed99a686e57ebd5c5800cb3d"} Feb 19 00:35:13 crc kubenswrapper[5108]: I0219 00:35:13.947094 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 19 00:35:14 crc kubenswrapper[5108]: I0219 00:35:14.924833 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerStarted","Data":"763292d6468670af06aa152b9306ecd240d958b9d92d959f2be4c34fdaffa63a"} Feb 19 00:35:14 crc kubenswrapper[5108]: I0219 00:35:14.949822 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v2t96" podStartSLOduration=3.951150511 podStartE2EDuration="4.949801126s" podCreationTimestamp="2026-02-19 00:35:10 +0000 UTC" firstStartedPulling="2026-02-19 00:35:11.89808912 +0000 UTC m=+1570.864735428" lastFinishedPulling="2026-02-19 00:35:12.896739735 +0000 UTC m=+1571.863386043" observedRunningTime="2026-02-19 00:35:14.945592232 +0000 UTC m=+1573.912238540" watchObservedRunningTime="2026-02-19 00:35:14.949801126 +0000 UTC m=+1573.916447444" Feb 19 00:35:19 crc kubenswrapper[5108]: I0219 00:35:19.848026 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:35:19 crc kubenswrapper[5108]: E0219 
00:35:19.848867 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:35:20 crc kubenswrapper[5108]: I0219 00:35:20.350915 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:20 crc kubenswrapper[5108]: I0219 00:35:20.350972 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:20 crc kubenswrapper[5108]: I0219 00:35:20.407535 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:20 crc kubenswrapper[5108]: I0219 00:35:20.823352 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:35:20 crc kubenswrapper[5108]: I0219 00:35:20.823620 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" podUID="8b307641-5074-4c53-b22a-03b3689c4b0d" containerName="default-interconnect" containerID="cri-o://ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67" gracePeriod=30 Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.017641 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.092313 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 
00:35:21.902255 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.939528 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zczjg"] Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.940793 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b307641-5074-4c53-b22a-03b3689c4b0d" containerName="default-interconnect" Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.940899 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b307641-5074-4c53-b22a-03b3689c4b0d" containerName="default-interconnect" Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.941091 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b307641-5074-4c53-b22a-03b3689c4b0d" containerName="default-interconnect" Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.979001 5108 generic.go:358] "Generic (PLEG): container finished" podID="c1b23465-43aa-4a3e-9617-5137e887360c" containerID="ff214db2051de0b9b8bc2417fd7fb7b78f44e38dde96ab82db26c83310606cdb" exitCode=0 Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.981370 5108 generic.go:358] "Generic (PLEG): container finished" podID="bf32a308-7483-43c7-80ec-21496776f93c" containerID="c038e62f8bae226d540731a502d6be108ab64845eccb062753b420c19fe80766" exitCode=0 Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.984688 5108 generic.go:358] "Generic (PLEG): container finished" podID="8b307641-5074-4c53-b22a-03b3689c4b0d" containerID="ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67" exitCode=0 Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.988601 5108 generic.go:358] "Generic (PLEG): container finished" podID="448a5226-c34a-469e-bc72-79158e2b2c92" containerID="b8e7f15f5262becc83ba530e0d351c1e03ac86d24b8a3e76c9b4a9e3af6fa4f3" exitCode=0 Feb 19 
00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.991129 5108 generic.go:358] "Generic (PLEG): container finished" podID="84c95b74-339a-4dce-9f5d-0f35cb34ed71" containerID="eb30471c5c099333587742461f8b1230a7160e22d81aca93b95305131e81d422" exitCode=0 Feb 19 00:35:21 crc kubenswrapper[5108]: I0219 00:35:21.993823 5108 generic.go:358] "Generic (PLEG): container finished" podID="cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d" containerID="8860a89439a6b6f6a296503a8024efcf6f5460ee490db9b348ede48561aec8e8" exitCode=0 Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.024747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.024827 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.024856 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfkmk\" (UniqueName: \"kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.024997 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials\") pod 
\"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.025015 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.025049 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.025098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config\") pod \"8b307641-5074-4c53-b22a-03b3689c4b0d\" (UID: \"8b307641-5074-4c53-b22a-03b3689c4b0d\") " Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.025800 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.031124 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.031191 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.031211 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk" (OuterVolumeSpecName: "kube-api-access-zfkmk") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "kube-api-access-zfkmk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.031348 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.031709 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.032446 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "8b307641-5074-4c53-b22a-03b3689c4b0d" (UID: "8b307641-5074-4c53-b22a-03b3689c4b0d"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126453 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126507 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126521 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126535 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/8b307641-5074-4c53-b22a-03b3689c4b0d-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126549 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126564 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/8b307641-5074-4c53-b22a-03b3689c4b0d-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.126578 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfkmk\" (UniqueName: 
\"kubernetes.io/projected/8b307641-5074-4c53-b22a-03b3689c4b0d-kube-api-access-zfkmk\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255089 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zczjg"] Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255382 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerDied","Data":"ff214db2051de0b9b8bc2417fd7fb7b78f44e38dde96ab82db26c83310606cdb"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255474 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerDied","Data":"c038e62f8bae226d540731a502d6be108ab64845eccb062753b420c19fe80766"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255560 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" event={"ID":"8b307641-5074-4c53-b22a-03b3689c4b0d","Type":"ContainerDied","Data":"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255277 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255236 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255758 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-d28h2" event={"ID":"8b307641-5074-4c53-b22a-03b3689c4b0d","Type":"ContainerDied","Data":"9063d3370a7eeecb1da239bbc047620eee8a96baa3b698d801ff4a99f89879d2"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255826 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerDied","Data":"b8e7f15f5262becc83ba530e0d351c1e03ac86d24b8a3e76c9b4a9e3af6fa4f3"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerDied","Data":"eb30471c5c099333587742461f8b1230a7160e22d81aca93b95305131e81d422"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255899 5108 scope.go:117] "RemoveContainer" containerID="ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.255915 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerDied","Data":"8860a89439a6b6f6a296503a8024efcf6f5460ee490db9b348ede48561aec8e8"} Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.257481 5108 scope.go:117] "RemoveContainer" containerID="b8e7f15f5262becc83ba530e0d351c1e03ac86d24b8a3e76c9b4a9e3af6fa4f3" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.257641 5108 scope.go:117] "RemoveContainer" containerID="8860a89439a6b6f6a296503a8024efcf6f5460ee490db9b348ede48561aec8e8" Feb 19 
00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.257715 5108 scope.go:117] "RemoveContainer" containerID="c038e62f8bae226d540731a502d6be108ab64845eccb062753b420c19fe80766" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.257778 5108 scope.go:117] "RemoveContainer" containerID="eb30471c5c099333587742461f8b1230a7160e22d81aca93b95305131e81d422" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.259158 5108 scope.go:117] "RemoveContainer" containerID="ff214db2051de0b9b8bc2417fd7fb7b78f44e38dde96ab82db26c83310606cdb" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358678 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358719 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358741 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-sasl-users\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358847 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/def034e8-a7cb-408e-bda8-63097924e980-sasl-config\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358901 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358926 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.358996 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhj6p\" (UniqueName: \"kubernetes.io/projected/def034e8-a7cb-408e-bda8-63097924e980-kube-api-access-zhj6p\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.459983 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460336 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460366 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-sasl-users\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460439 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/def034e8-a7cb-408e-bda8-63097924e980-sasl-config\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460487 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 
19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460510 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.460547 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhj6p\" (UniqueName: \"kubernetes.io/projected/def034e8-a7cb-408e-bda8-63097924e980-kube-api-access-zhj6p\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.461729 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/def034e8-a7cb-408e-bda8-63097924e980-sasl-config\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.461781 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.469902 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-d28h2"] Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.472118 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: 
\"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.472484 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-sasl-users\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.472637 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.474886 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.476477 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/def034e8-a7cb-408e-bda8-63097924e980-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.476965 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zhj6p\" (UniqueName: \"kubernetes.io/projected/def034e8-a7cb-408e-bda8-63097924e980-kube-api-access-zhj6p\") pod \"default-interconnect-55bf8d5cb-zczjg\" (UID: \"def034e8-a7cb-408e-bda8-63097924e980\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:22 crc kubenswrapper[5108]: I0219 00:35:22.584539 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" Feb 19 00:35:23 crc kubenswrapper[5108]: I0219 00:35:23.000678 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v2t96" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="registry-server" containerID="cri-o://763292d6468670af06aa152b9306ecd240d958b9d92d959f2be4c34fdaffa63a" gracePeriod=2 Feb 19 00:35:23 crc kubenswrapper[5108]: I0219 00:35:23.790110 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.236190 5108 scope.go:117] "RemoveContainer" containerID="ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67" Feb 19 00:35:24 crc kubenswrapper[5108]: E0219 00:35:24.238259 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67\": container with ID starting with ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67 not found: ID does not exist" containerID="ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.238306 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67"} err="failed to get container status \"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67\": rpc error: code = 
NotFound desc = could not find container \"ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67\": container with ID starting with ea16bd21141c3b30e27f59ec6a1fcfbf1e1ce3fa06fa25744773be3c8dd72e67 not found: ID does not exist" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.824339 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.824376 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zczjg"] Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.824536 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.837071 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.837340 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.844794 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b307641-5074-4c53-b22a-03b3689c4b0d" path="/var/lib/kubelet/pods/8b307641-5074-4c53-b22a-03b3689c4b0d/volumes" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.895241 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.895295 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-qdr-test-config\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.895369 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zwq\" (UniqueName: \"kubernetes.io/projected/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-kube-api-access-j4zwq\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.996935 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.997021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-qdr-test-config\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.997096 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zwq\" (UniqueName: \"kubernetes.io/projected/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-kube-api-access-j4zwq\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:24 crc kubenswrapper[5108]: I0219 00:35:24.997812 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-qdr-test-config\") pod \"qdr-test\" (UID: 
\"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:25 crc kubenswrapper[5108]: I0219 00:35:25.009384 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:25 crc kubenswrapper[5108]: I0219 00:35:25.014342 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zwq\" (UniqueName: \"kubernetes.io/projected/3a3f47d4-bad6-4747-922f-0df47e8fa0c6-kube-api-access-j4zwq\") pod \"qdr-test\" (UID: \"3a3f47d4-bad6-4747-922f-0df47e8fa0c6\") " pod="service-telemetry/qdr-test" Feb 19 00:35:25 crc kubenswrapper[5108]: I0219 00:35:25.019773 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" event={"ID":"def034e8-a7cb-408e-bda8-63097924e980","Type":"ContainerStarted","Data":"e0a4755478da6ac6f3e1f7e3ad637b2433b64965d1b43e587a6784cbaea15abc"} Feb 19 00:35:25 crc kubenswrapper[5108]: I0219 00:35:25.151873 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:35:25 crc kubenswrapper[5108]: I0219 00:35:25.585078 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:35:25 crc kubenswrapper[5108]: W0219 00:35:25.589221 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a3f47d4_bad6_4747_922f_0df47e8fa0c6.slice/crio-750faa6dd95237c56cdb657392535f74f996a22bf3f5043d7fb6eb17e0b4559e WatchSource:0}: Error finding container 750faa6dd95237c56cdb657392535f74f996a22bf3f5043d7fb6eb17e0b4559e: Status 404 returned error can't find the container with id 750faa6dd95237c56cdb657392535f74f996a22bf3f5043d7fb6eb17e0b4559e Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.031008 5108 generic.go:358] "Generic (PLEG): container finished" podID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerID="763292d6468670af06aa152b9306ecd240d958b9d92d959f2be4c34fdaffa63a" exitCode=0 Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.031448 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerDied","Data":"763292d6468670af06aa152b9306ecd240d958b9d92d959f2be4c34fdaffa63a"} Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.032904 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"3a3f47d4-bad6-4747-922f-0df47e8fa0c6","Type":"ContainerStarted","Data":"750faa6dd95237c56cdb657392535f74f996a22bf3f5043d7fb6eb17e0b4559e"} Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.832558 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.927048 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxwsr\" (UniqueName: \"kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr\") pod \"c9ac699b-2298-4231-a931-ef5cba33a4b9\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.927319 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content\") pod \"c9ac699b-2298-4231-a931-ef5cba33a4b9\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.933428 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr" (OuterVolumeSpecName: "kube-api-access-lxwsr") pod "c9ac699b-2298-4231-a931-ef5cba33a4b9" (UID: "c9ac699b-2298-4231-a931-ef5cba33a4b9"). InnerVolumeSpecName "kube-api-access-lxwsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.938719 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities\") pod \"c9ac699b-2298-4231-a931-ef5cba33a4b9\" (UID: \"c9ac699b-2298-4231-a931-ef5cba33a4b9\") " Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.939900 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities" (OuterVolumeSpecName: "utilities") pod "c9ac699b-2298-4231-a931-ef5cba33a4b9" (UID: "c9ac699b-2298-4231-a931-ef5cba33a4b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.940109 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:26 crc kubenswrapper[5108]: I0219 00:35:26.940129 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lxwsr\" (UniqueName: \"kubernetes.io/projected/c9ac699b-2298-4231-a931-ef5cba33a4b9-kube-api-access-lxwsr\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.034554 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9ac699b-2298-4231-a931-ef5cba33a4b9" (UID: "c9ac699b-2298-4231-a931-ef5cba33a4b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.040975 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ac699b-2298-4231-a931-ef5cba33a4b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.045147 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2t96" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.045169 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2t96" event={"ID":"c9ac699b-2298-4231-a931-ef5cba33a4b9","Type":"ContainerDied","Data":"dd17c4bc5a3f81d71a968273d043b430926d1842cc0f2d1aef552b7160a131fc"} Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.045232 5108 scope.go:117] "RemoveContainer" containerID="763292d6468670af06aa152b9306ecd240d958b9d92d959f2be4c34fdaffa63a" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.047008 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" event={"ID":"def034e8-a7cb-408e-bda8-63097924e980","Type":"ContainerStarted","Data":"1144892b36819a4731adf73e9e9ddffa47c89f151426dce4496d3ee3cc9f3eda"} Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.063072 5108 scope.go:117] "RemoveContainer" containerID="234a16cd076e46b9d570f42d4649941efd337e16ed99a686e57ebd5c5800cb3d" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.100298 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-zczjg" podStartSLOduration=7.100269443 podStartE2EDuration="7.100269443s" podCreationTimestamp="2026-02-19 00:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:35:27.073515784 +0000 UTC m=+1586.040162082" watchObservedRunningTime="2026-02-19 00:35:27.100269443 +0000 UTC m=+1586.066915751" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.101217 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.103688 5108 scope.go:117] "RemoveContainer" 
containerID="d50ebf7d799e175f9ad88ed3848119a8ccb003ecbd5fdb06c2a27b5e2c3b886e" Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.110503 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v2t96"] Feb 19 00:35:27 crc kubenswrapper[5108]: I0219 00:35:27.856921 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" path="/var/lib/kubelet/pods/c9ac699b-2298-4231-a931-ef5cba33a4b9/volumes" Feb 19 00:35:28 crc kubenswrapper[5108]: I0219 00:35:28.057542 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"53c5544946db0aa121b82bcf4a913706300848a05f1b2ec2f15b3442ab5a8bb9"} Feb 19 00:35:28 crc kubenswrapper[5108]: I0219 00:35:28.060464 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerStarted","Data":"f871f868706b2cf8dd809575f1f9caa8cf66a2d1a0b3ae04b1fceaa199391ce8"} Feb 19 00:35:28 crc kubenswrapper[5108]: I0219 00:35:28.063232 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerStarted","Data":"7af78db8424764086bc23383a5d618071e1705027d3fa76ab2c8ca2094ed0f1e"} Feb 19 00:35:28 crc kubenswrapper[5108]: I0219 00:35:28.090452 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"34390954cc16551e16d95725301cdf87abc33ce1d45479d27ad0cb00d9b6d2c3"} Feb 19 00:35:28 crc kubenswrapper[5108]: I0219 00:35:28.098513 5108 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"e1bddd3e2c8a05bd47ed3c075c9858eeb6e7ca9cee1ef5ff9fc7a09950562abd"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.106762 5108 generic.go:358] "Generic (PLEG): container finished" podID="cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d" containerID="7af78db8424764086bc23383a5d618071e1705027d3fa76ab2c8ca2094ed0f1e" exitCode=0 Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.106979 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerDied","Data":"7af78db8424764086bc23383a5d618071e1705027d3fa76ab2c8ca2094ed0f1e"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.107155 5108 scope.go:117] "RemoveContainer" containerID="8860a89439a6b6f6a296503a8024efcf6f5460ee490db9b348ede48561aec8e8" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.107649 5108 scope.go:117] "RemoveContainer" containerID="7af78db8424764086bc23383a5d618071e1705027d3fa76ab2c8ca2094ed0f1e" Feb 19 00:35:29 crc kubenswrapper[5108]: E0219 00:35:29.107919 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p_service-telemetry(cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" podUID="cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.109623 5108 generic.go:358] "Generic (PLEG): container finished" podID="c1b23465-43aa-4a3e-9617-5137e887360c" containerID="34390954cc16551e16d95725301cdf87abc33ce1d45479d27ad0cb00d9b6d2c3" exitCode=0 Feb 19 00:35:29 crc kubenswrapper[5108]: 
I0219 00:35:29.109682 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerDied","Data":"34390954cc16551e16d95725301cdf87abc33ce1d45479d27ad0cb00d9b6d2c3"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.110005 5108 scope.go:117] "RemoveContainer" containerID="34390954cc16551e16d95725301cdf87abc33ce1d45479d27ad0cb00d9b6d2c3" Feb 19 00:35:29 crc kubenswrapper[5108]: E0219 00:35:29.110175 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7_service-telemetry(c1b23465-43aa-4a3e-9617-5137e887360c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" podUID="c1b23465-43aa-4a3e-9617-5137e887360c" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.117146 5108 generic.go:358] "Generic (PLEG): container finished" podID="bf32a308-7483-43c7-80ec-21496776f93c" containerID="e1bddd3e2c8a05bd47ed3c075c9858eeb6e7ca9cee1ef5ff9fc7a09950562abd" exitCode=0 Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.117232 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerDied","Data":"e1bddd3e2c8a05bd47ed3c075c9858eeb6e7ca9cee1ef5ff9fc7a09950562abd"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.117792 5108 scope.go:117] "RemoveContainer" containerID="e1bddd3e2c8a05bd47ed3c075c9858eeb6e7ca9cee1ef5ff9fc7a09950562abd" Feb 19 00:35:29 crc kubenswrapper[5108]: E0219 00:35:29.118085 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9_service-telemetry(bf32a308-7483-43c7-80ec-21496776f93c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" podUID="bf32a308-7483-43c7-80ec-21496776f93c" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.120545 5108 generic.go:358] "Generic (PLEG): container finished" podID="448a5226-c34a-469e-bc72-79158e2b2c92" containerID="53c5544946db0aa121b82bcf4a913706300848a05f1b2ec2f15b3442ab5a8bb9" exitCode=0 Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.120646 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerDied","Data":"53c5544946db0aa121b82bcf4a913706300848a05f1b2ec2f15b3442ab5a8bb9"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.121159 5108 scope.go:117] "RemoveContainer" containerID="53c5544946db0aa121b82bcf4a913706300848a05f1b2ec2f15b3442ab5a8bb9" Feb 19 00:35:29 crc kubenswrapper[5108]: E0219 00:35:29.121400 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr_service-telemetry(448a5226-c34a-469e-bc72-79158e2b2c92)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" podUID="448a5226-c34a-469e-bc72-79158e2b2c92" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.131419 5108 generic.go:358] "Generic (PLEG): container finished" podID="84c95b74-339a-4dce-9f5d-0f35cb34ed71" containerID="f871f868706b2cf8dd809575f1f9caa8cf66a2d1a0b3ae04b1fceaa199391ce8" exitCode=0 Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.131499 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" 
event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerDied","Data":"f871f868706b2cf8dd809575f1f9caa8cf66a2d1a0b3ae04b1fceaa199391ce8"} Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.131955 5108 scope.go:117] "RemoveContainer" containerID="f871f868706b2cf8dd809575f1f9caa8cf66a2d1a0b3ae04b1fceaa199391ce8" Feb 19 00:35:29 crc kubenswrapper[5108]: E0219 00:35:29.152435 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-769879f664-h6jb4_service-telemetry(84c95b74-339a-4dce-9f5d-0f35cb34ed71)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" podUID="84c95b74-339a-4dce-9f5d-0f35cb34ed71" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.251085 5108 scope.go:117] "RemoveContainer" containerID="ff214db2051de0b9b8bc2417fd7fb7b78f44e38dde96ab82db26c83310606cdb" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.321214 5108 scope.go:117] "RemoveContainer" containerID="c038e62f8bae226d540731a502d6be108ab64845eccb062753b420c19fe80766" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.366495 5108 scope.go:117] "RemoveContainer" containerID="b8e7f15f5262becc83ba530e0d351c1e03ac86d24b8a3e76c9b4a9e3af6fa4f3" Feb 19 00:35:29 crc kubenswrapper[5108]: I0219 00:35:29.406046 5108 scope.go:117] "RemoveContainer" containerID="eb30471c5c099333587742461f8b1230a7160e22d81aca93b95305131e81d422" Feb 19 00:35:34 crc kubenswrapper[5108]: I0219 00:35:34.848141 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:35:34 crc kubenswrapper[5108]: E0219 00:35:34.848620 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.193859 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"3a3f47d4-bad6-4747-922f-0df47e8fa0c6","Type":"ContainerStarted","Data":"2d03d7c81db60d073876d544a2bc533be7e972c4df84405804ce6c1deb499d6b"} Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.215491 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=3.545272681 podStartE2EDuration="13.215470772s" podCreationTimestamp="2026-02-19 00:35:23 +0000 UTC" firstStartedPulling="2026-02-19 00:35:25.590899315 +0000 UTC m=+1584.557545623" lastFinishedPulling="2026-02-19 00:35:35.261097406 +0000 UTC m=+1594.227743714" observedRunningTime="2026-02-19 00:35:36.208313077 +0000 UTC m=+1595.174959395" watchObservedRunningTime="2026-02-19 00:35:36.215470772 +0000 UTC m=+1595.182117080" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.537315 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vzc7m"] Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538238 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="registry-server" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538260 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="registry-server" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538288 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="extract-utilities" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 
00:35:36.538296 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="extract-utilities" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538336 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="extract-content" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538346 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="extract-content" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.538503 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9ac699b-2298-4231-a931-ef5cba33a4b9" containerName="registry-server" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.553365 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vzc7m"] Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.553535 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.557823 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.558140 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.558352 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.559457 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.559646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.559730 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694531 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694583 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694622 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694656 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2wf\" (UniqueName: \"kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694680 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694722 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.694764 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.796268 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.796319 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.796345 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.796406 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rz2wf\" (UniqueName: \"kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc 
kubenswrapper[5108]: I0219 00:35:36.796425 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.796920 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.797050 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.797601 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.797658 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " 
pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.797726 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.797887 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.798073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.798111 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.817052 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz2wf\" (UniqueName: \"kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf\") pod \"stf-smoketest-smoke1-vzc7m\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") 
" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.844537 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.852959 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.853074 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:35:36 crc kubenswrapper[5108]: I0219 00:35:36.883502 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.001463 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8c8\" (UniqueName: \"kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8\") pod \"curl\" (UID: \"a4b3dc71-3653-434f-97fd-d3b7c0b2741e\") " pod="service-telemetry/curl" Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.103141 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8c8\" (UniqueName: \"kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8\") pod \"curl\" (UID: \"a4b3dc71-3653-434f-97fd-d3b7c0b2741e\") " pod="service-telemetry/curl" Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.138774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8c8\" (UniqueName: \"kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8\") pod \"curl\" (UID: \"a4b3dc71-3653-434f-97fd-d3b7c0b2741e\") " pod="service-telemetry/curl" Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.171319 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.340052 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vzc7m"] Feb 19 00:35:37 crc kubenswrapper[5108]: W0219 00:35:37.342110 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5406e9f_1fb2_4a07_9a34_411879196c27.slice/crio-fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8 WatchSource:0}: Error finding container fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8: Status 404 returned error can't find the container with id fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8 Feb 19 00:35:37 crc kubenswrapper[5108]: I0219 00:35:37.381425 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:35:38 crc kubenswrapper[5108]: I0219 00:35:38.210805 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a4b3dc71-3653-434f-97fd-d3b7c0b2741e","Type":"ContainerStarted","Data":"bcfee48dda80cd026f523f71ff9659b76d2cc1364847391a109caafac4efabd2"} Feb 19 00:35:38 crc kubenswrapper[5108]: I0219 00:35:38.212786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerStarted","Data":"fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8"} Feb 19 00:35:39 crc kubenswrapper[5108]: I0219 00:35:39.223474 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a4b3dc71-3653-434f-97fd-d3b7c0b2741e","Type":"ContainerStarted","Data":"22129078efc9eaf011f0804dfc066f52f51e89dd6ca196ae346a83669b836d04"} Feb 19 00:35:39 crc kubenswrapper[5108]: I0219 00:35:39.244372 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" 
podStartSLOduration=1.648981702 podStartE2EDuration="3.244352376s" podCreationTimestamp="2026-02-19 00:35:36 +0000 UTC" firstStartedPulling="2026-02-19 00:35:37.38976965 +0000 UTC m=+1596.356415948" lastFinishedPulling="2026-02-19 00:35:38.985140324 +0000 UTC m=+1597.951786622" observedRunningTime="2026-02-19 00:35:39.235916966 +0000 UTC m=+1598.202563274" watchObservedRunningTime="2026-02-19 00:35:39.244352376 +0000 UTC m=+1598.210998714" Feb 19 00:35:40 crc kubenswrapper[5108]: I0219 00:35:40.234853 5108 generic.go:358] "Generic (PLEG): container finished" podID="a4b3dc71-3653-434f-97fd-d3b7c0b2741e" containerID="22129078efc9eaf011f0804dfc066f52f51e89dd6ca196ae346a83669b836d04" exitCode=0 Feb 19 00:35:40 crc kubenswrapper[5108]: I0219 00:35:40.234919 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a4b3dc71-3653-434f-97fd-d3b7c0b2741e","Type":"ContainerDied","Data":"22129078efc9eaf011f0804dfc066f52f51e89dd6ca196ae346a83669b836d04"} Feb 19 00:35:40 crc kubenswrapper[5108]: I0219 00:35:40.848578 5108 scope.go:117] "RemoveContainer" containerID="7af78db8424764086bc23383a5d618071e1705027d3fa76ab2c8ca2094ed0f1e" Feb 19 00:35:41 crc kubenswrapper[5108]: I0219 00:35:41.855189 5108 scope.go:117] "RemoveContainer" containerID="f871f868706b2cf8dd809575f1f9caa8cf66a2d1a0b3ae04b1fceaa199391ce8" Feb 19 00:35:42 crc kubenswrapper[5108]: I0219 00:35:42.851546 5108 scope.go:117] "RemoveContainer" containerID="34390954cc16551e16d95725301cdf87abc33ce1d45479d27ad0cb00d9b6d2c3" Feb 19 00:35:42 crc kubenswrapper[5108]: I0219 00:35:42.851992 5108 scope.go:117] "RemoveContainer" containerID="53c5544946db0aa121b82bcf4a913706300848a05f1b2ec2f15b3442ab5a8bb9" Feb 19 00:35:43 crc kubenswrapper[5108]: I0219 00:35:43.851745 5108 scope.go:117] "RemoveContainer" containerID="e1bddd3e2c8a05bd47ed3c075c9858eeb6e7ca9cee1ef5ff9fc7a09950562abd" Feb 19 00:35:45 crc kubenswrapper[5108]: I0219 00:35:45.852428 5108 scope.go:117] "RemoveContainer" 
containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:35:45 crc kubenswrapper[5108]: E0219 00:35:45.852664 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:35:45 crc kubenswrapper[5108]: I0219 00:35:45.866993 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:35:45 crc kubenswrapper[5108]: I0219 00:35:45.948353 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8c8\" (UniqueName: \"kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8\") pod \"a4b3dc71-3653-434f-97fd-d3b7c0b2741e\" (UID: \"a4b3dc71-3653-434f-97fd-d3b7c0b2741e\") " Feb 19 00:35:45 crc kubenswrapper[5108]: I0219 00:35:45.957711 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8" (OuterVolumeSpecName: "kube-api-access-gv8c8") pod "a4b3dc71-3653-434f-97fd-d3b7c0b2741e" (UID: "a4b3dc71-3653-434f-97fd-d3b7c0b2741e"). InnerVolumeSpecName "kube-api-access-gv8c8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.009526 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_a4b3dc71-3653-434f-97fd-d3b7c0b2741e/curl/0.log" Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.050992 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gv8c8\" (UniqueName: \"kubernetes.io/projected/a4b3dc71-3653-434f-97fd-d3b7c0b2741e-kube-api-access-gv8c8\") on node \"crc\" DevicePath \"\"" Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.258131 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-dw958_21868270-1946-4c6b-9aec-fac51ff7301b/prometheus-webhook-snmp/0.log" Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.292450 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a4b3dc71-3653-434f-97fd-d3b7c0b2741e","Type":"ContainerDied","Data":"bcfee48dda80cd026f523f71ff9659b76d2cc1364847391a109caafac4efabd2"} Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.292494 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcfee48dda80cd026f523f71ff9659b76d2cc1364847391a109caafac4efabd2" Feb 19 00:35:46 crc kubenswrapper[5108]: I0219 00:35:46.292576 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.312297 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7" event={"ID":"c1b23465-43aa-4a3e-9617-5137e887360c","Type":"ContainerStarted","Data":"d7a829ad35047d0ff9988b8a4f071cbac4ad9e14a17ac0c9753b58e3ef8eb37c"} Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.316223 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9" event={"ID":"bf32a308-7483-43c7-80ec-21496776f93c","Type":"ContainerStarted","Data":"8f07d64704fbc3f95d96b0a3c95c51241199ff60359103855ca5dbf2505a6186"} Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.319566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr" event={"ID":"448a5226-c34a-469e-bc72-79158e2b2c92","Type":"ContainerStarted","Data":"6a8166e06dc4c05302681338f2b0ff9e948e49c4ec940ad62f1cf7d03d159096"} Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.322533 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-769879f664-h6jb4" event={"ID":"84c95b74-339a-4dce-9f5d-0f35cb34ed71","Type":"ContainerStarted","Data":"a76271afd7927ff76bf4cf74e863dba2e1d48b763d3f0c9e64a26dad990a26f9"} Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.325054 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerStarted","Data":"5e918c963509cbb407076ad77f3bb14b1495588e11c1155e0333cbec991db72d"} Feb 19 00:35:48 crc kubenswrapper[5108]: I0219 00:35:48.327324 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p" 
event={"ID":"cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d","Type":"ContainerStarted","Data":"e659336b369f8a6bd4e7e758fb725723493ec5d1dfe1d0f3579968a1eab0676c"} Feb 19 00:35:54 crc kubenswrapper[5108]: I0219 00:35:54.377274 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerStarted","Data":"505071238ceb7628f00e23fcc0642f8dbe428f046bb17135af1f86d1161290e7"} Feb 19 00:35:54 crc kubenswrapper[5108]: I0219 00:35:54.398365 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" podStartSLOduration=1.587446173 podStartE2EDuration="18.398347575s" podCreationTimestamp="2026-02-19 00:35:36 +0000 UTC" firstStartedPulling="2026-02-19 00:35:37.346691435 +0000 UTC m=+1596.313337743" lastFinishedPulling="2026-02-19 00:35:54.157592847 +0000 UTC m=+1613.124239145" observedRunningTime="2026-02-19 00:35:54.395169068 +0000 UTC m=+1613.361815386" watchObservedRunningTime="2026-02-19 00:35:54.398347575 +0000 UTC m=+1613.364993883" Feb 19 00:35:59 crc kubenswrapper[5108]: I0219 00:35:59.848515 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:35:59 crc kubenswrapper[5108]: E0219 00:35:59.848860 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:36:00 crc kubenswrapper[5108]: I0219 00:36:00.143654 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524356-wkdgx"] Feb 19 00:36:00 crc kubenswrapper[5108]: I0219 
00:36:00.144522 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b3dc71-3653-434f-97fd-d3b7c0b2741e" containerName="curl" Feb 19 00:36:00 crc kubenswrapper[5108]: I0219 00:36:00.144538 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b3dc71-3653-434f-97fd-d3b7c0b2741e" containerName="curl" Feb 19 00:36:00 crc kubenswrapper[5108]: I0219 00:36:00.144748 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4b3dc71-3653-434f-97fd-d3b7c0b2741e" containerName="curl" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.322538 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524356-wkdgx"] Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.322610 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.325078 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.325241 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.325769 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.362493 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsd9f\" (UniqueName: \"kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f\") pod \"auto-csr-approver-29524356-wkdgx\" (UID: \"01f2fdf3-00d2-4230-8643-56af472eab11\") " pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.464051 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jsd9f\" (UniqueName: \"kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f\") pod \"auto-csr-approver-29524356-wkdgx\" (UID: \"01f2fdf3-00d2-4230-8643-56af472eab11\") " pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.487910 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsd9f\" (UniqueName: \"kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f\") pod \"auto-csr-approver-29524356-wkdgx\" (UID: \"01f2fdf3-00d2-4230-8643-56af472eab11\") " pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:01 crc kubenswrapper[5108]: I0219 00:36:01.643459 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:02 crc kubenswrapper[5108]: W0219 00:36:02.093348 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2fdf3_00d2_4230_8643_56af472eab11.slice/crio-2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8 WatchSource:0}: Error finding container 2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8: Status 404 returned error can't find the container with id 2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8 Feb 19 00:36:02 crc kubenswrapper[5108]: I0219 00:36:02.093898 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524356-wkdgx"] Feb 19 00:36:03 crc kubenswrapper[5108]: I0219 00:36:03.022878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" event={"ID":"01f2fdf3-00d2-4230-8643-56af472eab11","Type":"ContainerStarted","Data":"2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8"} Feb 19 
00:36:05 crc kubenswrapper[5108]: I0219 00:36:05.039209 5108 generic.go:358] "Generic (PLEG): container finished" podID="01f2fdf3-00d2-4230-8643-56af472eab11" containerID="30c25d428721befe30a445b0da52df970afd616793d06e890d6aa9d3c2ad6288" exitCode=0 Feb 19 00:36:05 crc kubenswrapper[5108]: I0219 00:36:05.039289 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" event={"ID":"01f2fdf3-00d2-4230-8643-56af472eab11","Type":"ContainerDied","Data":"30c25d428721befe30a445b0da52df970afd616793d06e890d6aa9d3c2ad6288"} Feb 19 00:36:06 crc kubenswrapper[5108]: I0219 00:36:06.305670 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:06 crc kubenswrapper[5108]: I0219 00:36:06.330774 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsd9f\" (UniqueName: \"kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f\") pod \"01f2fdf3-00d2-4230-8643-56af472eab11\" (UID: \"01f2fdf3-00d2-4230-8643-56af472eab11\") " Feb 19 00:36:06 crc kubenswrapper[5108]: I0219 00:36:06.339266 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f" (OuterVolumeSpecName: "kube-api-access-jsd9f") pod "01f2fdf3-00d2-4230-8643-56af472eab11" (UID: "01f2fdf3-00d2-4230-8643-56af472eab11"). InnerVolumeSpecName "kube-api-access-jsd9f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:36:06 crc kubenswrapper[5108]: I0219 00:36:06.432656 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jsd9f\" (UniqueName: \"kubernetes.io/projected/01f2fdf3-00d2-4230-8643-56af472eab11-kube-api-access-jsd9f\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.056773 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" event={"ID":"01f2fdf3-00d2-4230-8643-56af472eab11","Type":"ContainerDied","Data":"2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8"} Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.056828 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2720267b34f7bcbe2c1f9b91052c6b90217adfd18470c07b1737b40101cb5cd8" Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.056791 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524356-wkdgx" Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.374102 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-gs8s6"] Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.381281 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-gs8s6"] Feb 19 00:36:07 crc kubenswrapper[5108]: I0219 00:36:07.860702 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782e3f41-8f60-44c0-80b1-bb38f5fdee23" path="/var/lib/kubelet/pods/782e3f41-8f60-44c0-80b1-bb38f5fdee23/volumes" Feb 19 00:36:10 crc kubenswrapper[5108]: I0219 00:36:10.537558 5108 scope.go:117] "RemoveContainer" containerID="66f642a0f642401e41f33077ff09bd4fde158cf004982d956101dd1026c22d23" Feb 19 00:36:10 crc kubenswrapper[5108]: I0219 00:36:10.848456 5108 scope.go:117] "RemoveContainer" 
containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:36:10 crc kubenswrapper[5108]: E0219 00:36:10.848730 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:36:16 crc kubenswrapper[5108]: I0219 00:36:16.377461 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-dw958_21868270-1946-4c6b-9aec-fac51ff7301b/prometheus-webhook-snmp/0.log" Feb 19 00:36:22 crc kubenswrapper[5108]: I0219 00:36:22.169363 5108 generic.go:358] "Generic (PLEG): container finished" podID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerID="5e918c963509cbb407076ad77f3bb14b1495588e11c1155e0333cbec991db72d" exitCode=0 Feb 19 00:36:22 crc kubenswrapper[5108]: I0219 00:36:22.169448 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerDied","Data":"5e918c963509cbb407076ad77f3bb14b1495588e11c1155e0333cbec991db72d"} Feb 19 00:36:22 crc kubenswrapper[5108]: I0219 00:36:22.170165 5108 scope.go:117] "RemoveContainer" containerID="5e918c963509cbb407076ad77f3bb14b1495588e11c1155e0333cbec991db72d" Feb 19 00:36:25 crc kubenswrapper[5108]: I0219 00:36:25.848826 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:36:25 crc kubenswrapper[5108]: E0219 00:36:25.849347 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:36:26 crc kubenswrapper[5108]: I0219 00:36:26.197441 5108 generic.go:358] "Generic (PLEG): container finished" podID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerID="505071238ceb7628f00e23fcc0642f8dbe428f046bb17135af1f86d1161290e7" exitCode=0 Feb 19 00:36:26 crc kubenswrapper[5108]: I0219 00:36:26.197517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerDied","Data":"505071238ceb7628f00e23fcc0642f8dbe428f046bb17135af1f86d1161290e7"} Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.497719 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539192 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539316 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539386 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539436 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz2wf\" (UniqueName: \"kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539452 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.539475 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config\") pod \"e5406e9f-1fb2-4a07-9a34-411879196c27\" (UID: \"e5406e9f-1fb2-4a07-9a34-411879196c27\") " Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.546840 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf" (OuterVolumeSpecName: "kube-api-access-rz2wf") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: 
"e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "kube-api-access-rz2wf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.558103 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.558816 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.559241 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.562152 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.564572 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.576202 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "e5406e9f-1fb2-4a07-9a34-411879196c27" (UID: "e5406e9f-1fb2-4a07-9a34-411879196c27"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641253 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641290 5108 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641301 5108 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641311 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" 
(UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641321 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz2wf\" (UniqueName: \"kubernetes.io/projected/e5406e9f-1fb2-4a07-9a34-411879196c27-kube-api-access-rz2wf\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641330 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:27 crc kubenswrapper[5108]: I0219 00:36:27.641341 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e5406e9f-1fb2-4a07-9a34-411879196c27-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:36:28 crc kubenswrapper[5108]: I0219 00:36:28.216035 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" Feb 19 00:36:28 crc kubenswrapper[5108]: I0219 00:36:28.216103 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vzc7m" event={"ID":"e5406e9f-1fb2-4a07-9a34-411879196c27","Type":"ContainerDied","Data":"fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8"} Feb 19 00:36:28 crc kubenswrapper[5108]: I0219 00:36:28.216158 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcadbda2d4e16900773d918709cdf7febbb51b2539bef4f9895ccdefeb9c3da8" Feb 19 00:36:29 crc kubenswrapper[5108]: I0219 00:36:29.378521 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vzc7m_e5406e9f-1fb2-4a07-9a34-411879196c27/smoketest-collectd/0.log" Feb 19 00:36:29 crc kubenswrapper[5108]: I0219 00:36:29.645644 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vzc7m_e5406e9f-1fb2-4a07-9a34-411879196c27/smoketest-ceilometer/0.log" Feb 19 00:36:29 crc kubenswrapper[5108]: I0219 00:36:29.931642 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-zczjg_def034e8-a7cb-408e-bda8-63097924e980/default-interconnect/0.log" Feb 19 00:36:30 crc kubenswrapper[5108]: I0219 00:36:30.193860 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7_c1b23465-43aa-4a3e-9617-5137e887360c/bridge/2.log" Feb 19 00:36:30 crc kubenswrapper[5108]: I0219 00:36:30.464811 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-mvmb7_c1b23465-43aa-4a3e-9617-5137e887360c/sg-core/0.log" Feb 19 00:36:30 crc kubenswrapper[5108]: I0219 00:36:30.682075 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-769879f664-h6jb4_84c95b74-339a-4dce-9f5d-0f35cb34ed71/bridge/2.log" Feb 19 00:36:30 crc kubenswrapper[5108]: I0219 00:36:30.971043 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-769879f664-h6jb4_84c95b74-339a-4dce-9f5d-0f35cb34ed71/sg-core/0.log" Feb 19 00:36:31 crc kubenswrapper[5108]: I0219 00:36:31.210804 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9_bf32a308-7483-43c7-80ec-21496776f93c/bridge/2.log" Feb 19 00:36:31 crc kubenswrapper[5108]: I0219 00:36:31.508677 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-6vkt9_bf32a308-7483-43c7-80ec-21496776f93c/sg-core/0.log" Feb 19 00:36:31 crc kubenswrapper[5108]: I0219 00:36:31.764115 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p_cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d/bridge/2.log" Feb 19 00:36:32 crc kubenswrapper[5108]: I0219 00:36:32.079864 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7477f9f55b-gbj6p_cfeebbbb-abf4-4e8a-8440-88d3eb5e2c8d/sg-core/0.log" Feb 19 00:36:32 crc kubenswrapper[5108]: I0219 00:36:32.357172 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr_448a5226-c34a-469e-bc72-79158e2b2c92/bridge/2.log" Feb 19 00:36:32 crc kubenswrapper[5108]: I0219 00:36:32.646197 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-tlpsr_448a5226-c34a-469e-bc72-79158e2b2c92/sg-core/0.log" Feb 19 00:36:36 crc kubenswrapper[5108]: I0219 00:36:36.478418 5108 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-784ccd9b9c-pdw7k_39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a/operator/0.log" Feb 19 00:36:36 crc kubenswrapper[5108]: I0219 00:36:36.728236 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3323d343-e59b-4ad7-a4bc-8ccedb940dee/prometheus/0.log" Feb 19 00:36:37 crc kubenswrapper[5108]: I0219 00:36:37.035365 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_651a531d-5946-47ac-95dc-3ad3f9f3b459/elasticsearch/0.log" Feb 19 00:36:37 crc kubenswrapper[5108]: I0219 00:36:37.233387 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-dw958_21868270-1946-4c6b-9aec-fac51ff7301b/prometheus-webhook-snmp/0.log" Feb 19 00:36:37 crc kubenswrapper[5108]: I0219 00:36:37.420780 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_edf4dc5b-ac62-4280-8090-05fc1d198800/alertmanager/0.log" Feb 19 00:36:37 crc kubenswrapper[5108]: I0219 00:36:37.848382 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:36:37 crc kubenswrapper[5108]: E0219 00:36:37.848858 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:36:52 crc kubenswrapper[5108]: I0219 00:36:52.469060 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-685f4dcc89-zhvhp_dcd58415-c463-451c-b96f-d49dadf7fd54/operator/0.log" Feb 19 
00:36:52 crc kubenswrapper[5108]: I0219 00:36:52.847693 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:36:52 crc kubenswrapper[5108]: E0219 00:36:52.848182 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:36:56 crc kubenswrapper[5108]: I0219 00:36:56.056792 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-784ccd9b9c-pdw7k_39dc5a93-5fd4-4eb5-b298-2039cf1d7b2a/operator/0.log" Feb 19 00:36:56 crc kubenswrapper[5108]: I0219 00:36:56.314220 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_3a3f47d4-bad6-4747-922f-0df47e8fa0c6/qdr/0.log" Feb 19 00:36:58 crc kubenswrapper[5108]: I0219 00:36:58.759219 5108 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" start-of-body= Feb 19 00:36:58 crc kubenswrapper[5108]: I0219 00:36:58.759877 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="0b638b8f4bb0070e40528db779baf6a2" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" Feb 19 00:36:59 crc kubenswrapper[5108]: I0219 00:36:59.730017 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-w7rrn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:37:00 crc kubenswrapper[5108]: I0219 00:37:00.002632 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w7rrn" podUID="5af44a88-046f-4a49-aa06-a2cdf10eb333" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 00:37:03 crc kubenswrapper[5108]: I0219 00:37:03.849282 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:37:03 crc kubenswrapper[5108]: E0219 00:37:03.850334 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:37:14 crc kubenswrapper[5108]: I0219 00:37:14.847813 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:37:14 crc kubenswrapper[5108]: E0219 00:37:14.848891 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:37:26 
crc kubenswrapper[5108]: I0219 00:37:26.848378 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:37:26 crc kubenswrapper[5108]: E0219 00:37:26.849250 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.194825 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196583 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-collectd" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196611 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-collectd" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196628 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01f2fdf3-00d2-4230-8643-56af472eab11" containerName="oc" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196666 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f2fdf3-00d2-4230-8643-56af472eab11" containerName="oc" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196708 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-ceilometer" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.196748 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-ceilometer" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.197106 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-ceilometer" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.197157 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="01f2fdf3-00d2-4230-8643-56af472eab11" containerName="oc" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.197181 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5406e9f-1fb2-4a07-9a34-411879196c27" containerName="smoketest-collectd" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.215246 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.215489 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.316779 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpc5\" (UniqueName: \"kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.316915 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.316977 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.419108 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kpc5\" (UniqueName: \"kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.419736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.420563 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.420701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.421244 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.455422 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kpc5\" (UniqueName: \"kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5\") pod \"certified-operators-cdkmb\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:27 crc kubenswrapper[5108]: I0219 00:37:27.540805 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:28 crc kubenswrapper[5108]: I0219 00:37:28.059258 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:28 crc kubenswrapper[5108]: I0219 00:37:28.063812 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:37:28 crc kubenswrapper[5108]: I0219 00:37:28.767042 5108 generic.go:358] "Generic (PLEG): container finished" podID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerID="79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d" exitCode=0 Feb 19 00:37:28 crc kubenswrapper[5108]: I0219 00:37:28.767344 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerDied","Data":"79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d"} Feb 19 00:37:28 crc kubenswrapper[5108]: I0219 00:37:28.767373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" 
event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerStarted","Data":"d6c7f07d1a7b8fb88646dad2834de4a28cab3887cd7c36269bc6853fe3e1c214"} Feb 19 00:37:30 crc kubenswrapper[5108]: I0219 00:37:30.791917 5108 generic.go:358] "Generic (PLEG): container finished" podID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerID="a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315" exitCode=0 Feb 19 00:37:30 crc kubenswrapper[5108]: I0219 00:37:30.792056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerDied","Data":"a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315"} Feb 19 00:37:31 crc kubenswrapper[5108]: I0219 00:37:31.807134 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerStarted","Data":"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6"} Feb 19 00:37:31 crc kubenswrapper[5108]: I0219 00:37:31.834754 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdkmb" podStartSLOduration=3.911166098 podStartE2EDuration="4.834725854s" podCreationTimestamp="2026-02-19 00:37:27 +0000 UTC" firstStartedPulling="2026-02-19 00:37:28.76880416 +0000 UTC m=+1707.735450508" lastFinishedPulling="2026-02-19 00:37:29.692363916 +0000 UTC m=+1708.659010264" observedRunningTime="2026-02-19 00:37:31.830049965 +0000 UTC m=+1710.796696353" watchObservedRunningTime="2026-02-19 00:37:31.834725854 +0000 UTC m=+1710.801372192" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.294396 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jtkpm/must-gather-dlbjv"] Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.308922 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.313288 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-jtkpm\"/\"default-dockercfg-rq9tw\"" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.313467 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-jtkpm\"/\"kube-root-ca.crt\"" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.313917 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-jtkpm\"/\"openshift-service-ca.crt\"" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.319406 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jtkpm/must-gather-dlbjv"] Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.407442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8qn\" (UniqueName: \"kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn\") pod \"must-gather-dlbjv\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.407601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output\") pod \"must-gather-dlbjv\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.508766 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8qn\" (UniqueName: \"kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn\") pod \"must-gather-dlbjv\" (UID: 
\"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.508883 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output\") pod \"must-gather-dlbjv\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.509323 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output\") pod \"must-gather-dlbjv\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.527632 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8qn\" (UniqueName: \"kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn\") pod \"must-gather-dlbjv\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.626514 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:37:32 crc kubenswrapper[5108]: I0219 00:37:32.840266 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jtkpm/must-gather-dlbjv"] Feb 19 00:37:32 crc kubenswrapper[5108]: W0219 00:37:32.851597 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0bfc62a_5e28_491b_9599_9fbd7ed02100.slice/crio-9d9c83c62beb911ca561eb76cbe1248710d0c22370d0547b42b6970160e29d3a WatchSource:0}: Error finding container 9d9c83c62beb911ca561eb76cbe1248710d0c22370d0547b42b6970160e29d3a: Status 404 returned error can't find the container with id 9d9c83c62beb911ca561eb76cbe1248710d0c22370d0547b42b6970160e29d3a Feb 19 00:37:33 crc kubenswrapper[5108]: I0219 00:37:33.832229 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" event={"ID":"f0bfc62a-5e28-491b-9599-9fbd7ed02100","Type":"ContainerStarted","Data":"9d9c83c62beb911ca561eb76cbe1248710d0c22370d0547b42b6970160e29d3a"} Feb 19 00:37:37 crc kubenswrapper[5108]: I0219 00:37:37.541383 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:37 crc kubenswrapper[5108]: I0219 00:37:37.543760 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:37 crc kubenswrapper[5108]: I0219 00:37:37.604157 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:37 crc kubenswrapper[5108]: I0219 00:37:37.925643 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:37 crc kubenswrapper[5108]: I0219 00:37:37.989322 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:39 crc kubenswrapper[5108]: I0219 00:37:39.878529 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" event={"ID":"f0bfc62a-5e28-491b-9599-9fbd7ed02100","Type":"ContainerStarted","Data":"8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760"} Feb 19 00:37:39 crc kubenswrapper[5108]: I0219 00:37:39.879091 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdkmb" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="registry-server" containerID="cri-o://6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6" gracePeriod=2 Feb 19 00:37:39 crc kubenswrapper[5108]: I0219 00:37:39.879157 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" event={"ID":"f0bfc62a-5e28-491b-9599-9fbd7ed02100","Type":"ContainerStarted","Data":"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7"} Feb 19 00:37:39 crc kubenswrapper[5108]: I0219 00:37:39.905043 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" podStartSLOduration=1.8127035569999999 podStartE2EDuration="7.905026386s" podCreationTimestamp="2026-02-19 00:37:32 +0000 UTC" firstStartedPulling="2026-02-19 00:37:32.862231806 +0000 UTC m=+1711.828878114" lastFinishedPulling="2026-02-19 00:37:38.954554635 +0000 UTC m=+1717.921200943" observedRunningTime="2026-02-19 00:37:39.90077849 +0000 UTC m=+1718.867424808" watchObservedRunningTime="2026-02-19 00:37:39.905026386 +0000 UTC m=+1718.871672694" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.255662 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.347044 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities\") pod \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.347131 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kpc5\" (UniqueName: \"kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5\") pod \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.347238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content\") pod \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\" (UID: \"3563ce67-bb19-4f1f-969f-c2eb84f4b812\") " Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.348462 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities" (OuterVolumeSpecName: "utilities") pod "3563ce67-bb19-4f1f-969f-c2eb84f4b812" (UID: "3563ce67-bb19-4f1f-969f-c2eb84f4b812"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.367104 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5" (OuterVolumeSpecName: "kube-api-access-5kpc5") pod "3563ce67-bb19-4f1f-969f-c2eb84f4b812" (UID: "3563ce67-bb19-4f1f-969f-c2eb84f4b812"). InnerVolumeSpecName "kube-api-access-5kpc5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.449190 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.449253 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5kpc5\" (UniqueName: \"kubernetes.io/projected/3563ce67-bb19-4f1f-969f-c2eb84f4b812-kube-api-access-5kpc5\") on node \"crc\" DevicePath \"\"" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.787520 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3563ce67-bb19-4f1f-969f-c2eb84f4b812" (UID: "3563ce67-bb19-4f1f-969f-c2eb84f4b812"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.854272 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3563ce67-bb19-4f1f-969f-c2eb84f4b812-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.893972 5108 generic.go:358] "Generic (PLEG): container finished" podID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerID="6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6" exitCode=0 Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.894139 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerDied","Data":"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6"} Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.894176 5108 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdkmb" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.894210 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdkmb" event={"ID":"3563ce67-bb19-4f1f-969f-c2eb84f4b812","Type":"ContainerDied","Data":"d6c7f07d1a7b8fb88646dad2834de4a28cab3887cd7c36269bc6853fe3e1c214"} Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.894237 5108 scope.go:117] "RemoveContainer" containerID="6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.936911 5108 scope.go:117] "RemoveContainer" containerID="a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315" Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.942592 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.963591 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdkmb"] Feb 19 00:37:40 crc kubenswrapper[5108]: I0219 00:37:40.974654 5108 scope.go:117] "RemoveContainer" containerID="79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.008645 5108 scope.go:117] "RemoveContainer" containerID="6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6" Feb 19 00:37:41 crc kubenswrapper[5108]: E0219 00:37:41.009156 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6\": container with ID starting with 6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6 not found: ID does not exist" containerID="6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.009193 
5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6"} err="failed to get container status \"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6\": rpc error: code = NotFound desc = could not find container \"6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6\": container with ID starting with 6fb8dfb116ca09391c375ae795411753c0cf169a066659679cfa259519d7ada6 not found: ID does not exist" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.009219 5108 scope.go:117] "RemoveContainer" containerID="a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315" Feb 19 00:37:41 crc kubenswrapper[5108]: E0219 00:37:41.009775 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315\": container with ID starting with a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315 not found: ID does not exist" containerID="a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.009817 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315"} err="failed to get container status \"a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315\": rpc error: code = NotFound desc = could not find container \"a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315\": container with ID starting with a3b8a2146a8e4ef7674e8f266278b0237cb8ee0b3037be39148e7093049e8315 not found: ID does not exist" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.009850 5108 scope.go:117] "RemoveContainer" containerID="79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d" Feb 19 00:37:41 crc kubenswrapper[5108]: E0219 
00:37:41.010190 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d\": container with ID starting with 79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d not found: ID does not exist" containerID="79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.010218 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d"} err="failed to get container status \"79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d\": rpc error: code = NotFound desc = could not find container \"79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d\": container with ID starting with 79e3ce2bef3275d2a28d44fb8a257c2f6a0e5d106f7d8b450248fcf0c366d23d not found: ID does not exist" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.855116 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:37:41 crc kubenswrapper[5108]: E0219 00:37:41.855660 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:37:41 crc kubenswrapper[5108]: I0219 00:37:41.858446 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" path="/var/lib/kubelet/pods/3563ce67-bb19-4f1f-969f-c2eb84f4b812/volumes" Feb 19 00:37:53 crc kubenswrapper[5108]: I0219 00:37:53.847807 
5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:37:53 crc kubenswrapper[5108]: E0219 00:37:53.848848 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.140100 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524358-s6vvf"] Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141517 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="registry-server" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141533 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="registry-server" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141549 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="extract-utilities" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141557 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="extract-utilities" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141598 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="extract-content" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141605 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" 
containerName="extract-content" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.141740 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3563ce67-bb19-4f1f-969f-c2eb84f4b812" containerName="registry-server" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.152775 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524358-s6vvf"] Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.152953 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.155723 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.155736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.156768 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.265490 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8t5\" (UniqueName: \"kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5\") pod \"auto-csr-approver-29524358-s6vvf\" (UID: \"c33fd46e-730f-4f8c-8bed-de7ab778e10c\") " pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.368474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx8t5\" (UniqueName: \"kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5\") pod \"auto-csr-approver-29524358-s6vvf\" (UID: \"c33fd46e-730f-4f8c-8bed-de7ab778e10c\") " 
pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.390474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx8t5\" (UniqueName: \"kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5\") pod \"auto-csr-approver-29524358-s6vvf\" (UID: \"c33fd46e-730f-4f8c-8bed-de7ab778e10c\") " pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.487042 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:00 crc kubenswrapper[5108]: I0219 00:38:00.922162 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524358-s6vvf"] Feb 19 00:38:01 crc kubenswrapper[5108]: I0219 00:38:01.115226 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" event={"ID":"c33fd46e-730f-4f8c-8bed-de7ab778e10c","Type":"ContainerStarted","Data":"8eacd6f8fc8503d8d8ad62100ec1259dc04a9c20bffd85fef4614f44f407f12f"} Feb 19 00:38:02 crc kubenswrapper[5108]: I0219 00:38:02.126988 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" event={"ID":"c33fd46e-730f-4f8c-8bed-de7ab778e10c","Type":"ContainerStarted","Data":"9b1753a0dd40adb51c4b5171ebd7359a1113a16afa40a31e7cf0919730580e46"} Feb 19 00:38:02 crc kubenswrapper[5108]: I0219 00:38:02.142085 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" podStartSLOduration=1.319376808 podStartE2EDuration="2.142068553s" podCreationTimestamp="2026-02-19 00:38:00 +0000 UTC" firstStartedPulling="2026-02-19 00:38:00.938310202 +0000 UTC m=+1739.904956510" lastFinishedPulling="2026-02-19 00:38:01.761001907 +0000 UTC m=+1740.727648255" observedRunningTime="2026-02-19 
00:38:02.141698193 +0000 UTC m=+1741.108344511" watchObservedRunningTime="2026-02-19 00:38:02.142068553 +0000 UTC m=+1741.108714861" Feb 19 00:38:03 crc kubenswrapper[5108]: I0219 00:38:03.136268 5108 generic.go:358] "Generic (PLEG): container finished" podID="c33fd46e-730f-4f8c-8bed-de7ab778e10c" containerID="9b1753a0dd40adb51c4b5171ebd7359a1113a16afa40a31e7cf0919730580e46" exitCode=0 Feb 19 00:38:03 crc kubenswrapper[5108]: I0219 00:38:03.136556 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" event={"ID":"c33fd46e-730f-4f8c-8bed-de7ab778e10c","Type":"ContainerDied","Data":"9b1753a0dd40adb51c4b5171ebd7359a1113a16afa40a31e7cf0919730580e46"} Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.439616 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.633460 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx8t5\" (UniqueName: \"kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5\") pod \"c33fd46e-730f-4f8c-8bed-de7ab778e10c\" (UID: \"c33fd46e-730f-4f8c-8bed-de7ab778e10c\") " Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.640066 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5" (OuterVolumeSpecName: "kube-api-access-lx8t5") pod "c33fd46e-730f-4f8c-8bed-de7ab778e10c" (UID: "c33fd46e-730f-4f8c-8bed-de7ab778e10c"). InnerVolumeSpecName "kube-api-access-lx8t5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.735619 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lx8t5\" (UniqueName: \"kubernetes.io/projected/c33fd46e-730f-4f8c-8bed-de7ab778e10c-kube-api-access-lx8t5\") on node \"crc\" DevicePath \"\"" Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.848800 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:38:04 crc kubenswrapper[5108]: E0219 00:38:04.849500 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.941382 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-ghgqx"] Feb 19 00:38:04 crc kubenswrapper[5108]: I0219 00:38:04.946833 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-ghgqx"] Feb 19 00:38:05 crc kubenswrapper[5108]: I0219 00:38:05.165521 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" Feb 19 00:38:05 crc kubenswrapper[5108]: I0219 00:38:05.165890 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524358-s6vvf" event={"ID":"c33fd46e-730f-4f8c-8bed-de7ab778e10c","Type":"ContainerDied","Data":"8eacd6f8fc8503d8d8ad62100ec1259dc04a9c20bffd85fef4614f44f407f12f"} Feb 19 00:38:05 crc kubenswrapper[5108]: I0219 00:38:05.166114 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eacd6f8fc8503d8d8ad62100ec1259dc04a9c20bffd85fef4614f44f407f12f" Feb 19 00:38:05 crc kubenswrapper[5108]: I0219 00:38:05.864732 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cb93e4-af2a-4449-8999-ecf6da709e25" path="/var/lib/kubelet/pods/c7cb93e4-af2a-4449-8999-ecf6da709e25/volumes" Feb 19 00:38:10 crc kubenswrapper[5108]: I0219 00:38:10.686468 5108 scope.go:117] "RemoveContainer" containerID="5320edacf2395e374027032f6804668ed1ed9ac5cfd6c437a2e21d95b8711c9d" Feb 19 00:38:10 crc kubenswrapper[5108]: I0219 00:38:10.778578 5108 scope.go:117] "RemoveContainer" containerID="2b2b51a9a04733ead42d886b700eee4dae45306940a7b4f2a58cc76ce529df06" Feb 19 00:38:10 crc kubenswrapper[5108]: I0219 00:38:10.868613 5108 scope.go:117] "RemoveContainer" containerID="923df79374461ac9706343f2dbc43239e0973ab29a3ca53c650b2466052edfe7" Feb 19 00:38:10 crc kubenswrapper[5108]: I0219 00:38:10.943142 5108 scope.go:117] "RemoveContainer" containerID="97efbc73aeada7b14dad657560925b87f536ae72f0d7f237a14a61d4d08b85be" Feb 19 00:38:11 crc kubenswrapper[5108]: I0219 00:38:11.013609 5108 scope.go:117] "RemoveContainer" containerID="8526ebedc4d904e0f590dc14b68c6af204033a4a895f2a5589d7d438e457f723" Feb 19 00:38:16 crc kubenswrapper[5108]: I0219 00:38:16.848269 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:38:16 crc kubenswrapper[5108]: E0219 
00:38:16.849217 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.498267 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.503479 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c33fd46e-730f-4f8c-8bed-de7ab778e10c" containerName="oc" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.503535 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33fd46e-730f-4f8c-8bed-de7ab778e10c" containerName="oc" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.503989 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c33fd46e-730f-4f8c-8bed-de7ab778e10c" containerName="oc" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.520292 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.520457 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.584568 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5k4\" (UniqueName: \"kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4\") pod \"infrawatch-operators-jjv84\" (UID: \"981ea5b5-02a7-44ea-a25d-2982bfdf8b30\") " pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.686629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5k4\" (UniqueName: \"kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4\") pod \"infrawatch-operators-jjv84\" (UID: \"981ea5b5-02a7-44ea-a25d-2982bfdf8b30\") " pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.710761 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5k4\" (UniqueName: \"kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4\") pod \"infrawatch-operators-jjv84\" (UID: \"981ea5b5-02a7-44ea-a25d-2982bfdf8b30\") " pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:24 crc kubenswrapper[5108]: I0219 00:38:24.846366 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:25 crc kubenswrapper[5108]: I0219 00:38:25.080266 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:25 crc kubenswrapper[5108]: W0219 00:38:25.084355 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod981ea5b5_02a7_44ea_a25d_2982bfdf8b30.slice/crio-152f99daf50dffabb6f9722d972af9b6509994b43d6c0b4d6224870384f81b6d WatchSource:0}: Error finding container 152f99daf50dffabb6f9722d972af9b6509994b43d6c0b4d6224870384f81b6d: Status 404 returned error can't find the container with id 152f99daf50dffabb6f9722d972af9b6509994b43d6c0b4d6224870384f81b6d Feb 19 00:38:25 crc kubenswrapper[5108]: I0219 00:38:25.315314 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jjv84" event={"ID":"981ea5b5-02a7-44ea-a25d-2982bfdf8b30","Type":"ContainerStarted","Data":"152f99daf50dffabb6f9722d972af9b6509994b43d6c0b4d6224870384f81b6d"} Feb 19 00:38:25 crc kubenswrapper[5108]: I0219 00:38:25.910424 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-2554r_98aac6ae-e129-4ce6-9b45-3eb23232be7d/control-plane-machine-set-operator/0.log" Feb 19 00:38:26 crc kubenswrapper[5108]: I0219 00:38:26.046714 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-dtlcj_45c6feda-c272-4a12-b1fb-ad25af916694/kube-rbac-proxy/0.log" Feb 19 00:38:26 crc kubenswrapper[5108]: I0219 00:38:26.099723 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-dtlcj_45c6feda-c272-4a12-b1fb-ad25af916694/machine-api-operator/0.log" Feb 19 00:38:26 crc kubenswrapper[5108]: I0219 00:38:26.323646 5108 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/infrawatch-operators-jjv84" event={"ID":"981ea5b5-02a7-44ea-a25d-2982bfdf8b30","Type":"ContainerStarted","Data":"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818"} Feb 19 00:38:26 crc kubenswrapper[5108]: I0219 00:38:26.342737 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jjv84" podStartSLOduration=2.244817258 podStartE2EDuration="2.342713739s" podCreationTimestamp="2026-02-19 00:38:24 +0000 UTC" firstStartedPulling="2026-02-19 00:38:25.085959202 +0000 UTC m=+1764.052605510" lastFinishedPulling="2026-02-19 00:38:25.183855673 +0000 UTC m=+1764.150501991" observedRunningTime="2026-02-19 00:38:26.337480516 +0000 UTC m=+1765.304126844" watchObservedRunningTime="2026-02-19 00:38:26.342713739 +0000 UTC m=+1765.309360047" Feb 19 00:38:28 crc kubenswrapper[5108]: I0219 00:38:28.848570 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:38:28 crc kubenswrapper[5108]: E0219 00:38:28.849205 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:38:34 crc kubenswrapper[5108]: I0219 00:38:34.847512 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:34 crc kubenswrapper[5108]: I0219 00:38:34.848079 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:34 crc kubenswrapper[5108]: I0219 00:38:34.899027 5108 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:35 crc kubenswrapper[5108]: I0219 00:38:35.429180 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:36 crc kubenswrapper[5108]: I0219 00:38:36.236733 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:37 crc kubenswrapper[5108]: I0219 00:38:37.414243 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-jjv84" podUID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" containerName="registry-server" containerID="cri-o://1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818" gracePeriod=2 Feb 19 00:38:37 crc kubenswrapper[5108]: I0219 00:38:37.819766 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.006592 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5k4\" (UniqueName: \"kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4\") pod \"981ea5b5-02a7-44ea-a25d-2982bfdf8b30\" (UID: \"981ea5b5-02a7-44ea-a25d-2982bfdf8b30\") " Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.015544 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4" (OuterVolumeSpecName: "kube-api-access-6c5k4") pod "981ea5b5-02a7-44ea-a25d-2982bfdf8b30" (UID: "981ea5b5-02a7-44ea-a25d-2982bfdf8b30"). InnerVolumeSpecName "kube-api-access-6c5k4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.108828 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6c5k4\" (UniqueName: \"kubernetes.io/projected/981ea5b5-02a7-44ea-a25d-2982bfdf8b30-kube-api-access-6c5k4\") on node \"crc\" DevicePath \"\"" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.436464 5108 generic.go:358] "Generic (PLEG): container finished" podID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" containerID="1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818" exitCode=0 Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.436568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jjv84" event={"ID":"981ea5b5-02a7-44ea-a25d-2982bfdf8b30","Type":"ContainerDied","Data":"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818"} Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.436641 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jjv84" event={"ID":"981ea5b5-02a7-44ea-a25d-2982bfdf8b30","Type":"ContainerDied","Data":"152f99daf50dffabb6f9722d972af9b6509994b43d6c0b4d6224870384f81b6d"} Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.436647 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jjv84" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.436672 5108 scope.go:117] "RemoveContainer" containerID="1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.475566 5108 scope.go:117] "RemoveContainer" containerID="1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818" Feb 19 00:38:38 crc kubenswrapper[5108]: E0219 00:38:38.479066 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818\": container with ID starting with 1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818 not found: ID does not exist" containerID="1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.479142 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818"} err="failed to get container status \"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818\": rpc error: code = NotFound desc = could not find container \"1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818\": container with ID starting with 1edf0b2a5d18da2123ea8fe58c75a2917aa1039bf259161212ebeaf5497b8818 not found: ID does not exist" Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.494488 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:38 crc kubenswrapper[5108]: I0219 00:38:38.504565 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jjv84"] Feb 19 00:38:39 crc kubenswrapper[5108]: I0219 00:38:39.684727 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-759f64656b-qsbwt_e96a0c11-ab9b-48a6-9a98-94a33b8b828d/cert-manager-controller/0.log" Feb 19 00:38:39 crc kubenswrapper[5108]: I0219 00:38:39.841512 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-lccmr_1810224b-992d-40ff-a9ed-d20d16b843e4/cert-manager-cainjector/0.log" Feb 19 00:38:39 crc kubenswrapper[5108]: I0219 00:38:39.855992 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" path="/var/lib/kubelet/pods/981ea5b5-02a7-44ea-a25d-2982bfdf8b30/volumes" Feb 19 00:38:39 crc kubenswrapper[5108]: I0219 00:38:39.898954 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-k2bvb_bdf69b0b-3608-4252-9290-a0e77f5c73ca/cert-manager-webhook/0.log" Feb 19 00:38:41 crc kubenswrapper[5108]: I0219 00:38:41.864714 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:38:41 crc kubenswrapper[5108]: E0219 00:38:41.865418 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.150616 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-hx8sv_387cf543-9cc1-4861-b4ce-68abdc01d808/prometheus-operator/0.log" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.216527 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh_266935b2-7e3e-4471-ab13-97b596e98f12/prometheus-operator-admission-webhook/0.log" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.305762 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr_0315a649-f003-4488-a10e-025063b858af/prometheus-operator-admission-webhook/0.log" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.389586 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-dxnlv_0ab73ba4-63c1-423b-9bc7-ecdec5a770b1/operator/0.log" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.512357 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bft9p_41c947a0-c927-4923-a233-a42d1a8b1039/perses-operator/0.log" Feb 19 00:38:55 crc kubenswrapper[5108]: I0219 00:38:55.848705 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:38:55 crc kubenswrapper[5108]: E0219 00:38:55.849515 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:39:02 crc kubenswrapper[5108]: I0219 00:39:02.628171 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:39:02 crc kubenswrapper[5108]: I0219 00:39:02.633462 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log" Feb 19 00:39:02 crc kubenswrapper[5108]: I0219 00:39:02.635668 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:39:02 crc kubenswrapper[5108]: I0219 00:39:02.640167 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Feb 19 00:39:08 crc kubenswrapper[5108]: I0219 00:39:08.847764 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:39:08 crc kubenswrapper[5108]: E0219 00:39:08.848538 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.670017 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/util/0.log" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.808778 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/pull/0.log" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.833308 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/pull/0.log" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.836323 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/util/0.log" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.977272 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/util/0.log" Feb 19 00:39:11 crc kubenswrapper[5108]: I0219 00:39:11.987911 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/extract/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.025027 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f197djg_11b0ad91-9b7a-4520-8abb-9ca84c22c5cb/pull/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.149000 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/util/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.344059 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/util/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.351389 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/pull/0.log" Feb 19 
00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.355045 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/pull/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.496828 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/util/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.524970 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/pull/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.529264 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ff8gb5_0b613a11-75dd-4743-b254-1c46655902a5/extract/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.682748 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/util/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.830358 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/util/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.831803 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/pull/0.log" Feb 19 00:39:12 crc kubenswrapper[5108]: I0219 00:39:12.853874 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/pull/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.027661 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/util/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.049769 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/pull/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.066672 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nwx4x_e8d4c5ea-879f-4722-bc3f-d57e6fc208e9/extract/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.186287 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/util/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.439331 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/util/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.443980 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/pull/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.446087 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/pull/0.log" Feb 19 
00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.581009 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/util/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.637643 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/pull/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.671809 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084kh56_16b44f18-0a6f-4fc0-b923-f3bc5a596156/extract/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.775069 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-utilities/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.893357 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-utilities/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.912688 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-content/0.log" Feb 19 00:39:13 crc kubenswrapper[5108]: I0219 00:39:13.955448 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.049895 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-utilities/0.log" Feb 19 00:39:14 crc 
kubenswrapper[5108]: I0219 00:39:14.073011 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.193530 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-utilities/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.281473 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lv87_2cabb708-2cc7-4505-9dae-0d78ce2ed6b0/registry-server/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.341014 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-utilities/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.353708 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.354920 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.516727 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.527762 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/extract-utilities/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.566989 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-7bgw9_48bda508-98fc-4c83-bbf1-98ad97774a97/marketplace-operator/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.707162 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-utilities/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.959359 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wlkgp_ec9e03f3-e9a6-482d-a19b-87b2a240761e/registry-server/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.959513 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-content/0.log" Feb 19 00:39:14 crc kubenswrapper[5108]: I0219 00:39:14.986746 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-content/0.log" Feb 19 00:39:15 crc kubenswrapper[5108]: I0219 00:39:15.016416 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-utilities/0.log" Feb 19 00:39:15 crc kubenswrapper[5108]: I0219 00:39:15.149570 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-content/0.log" Feb 19 00:39:15 crc kubenswrapper[5108]: I0219 00:39:15.160467 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/extract-utilities/0.log" Feb 19 00:39:15 crc kubenswrapper[5108]: I0219 00:39:15.456294 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-49llx_caca46e8-3d11-46fa-9cdf-92e60dfca341/registry-server/0.log" Feb 19 00:39:20 crc kubenswrapper[5108]: I0219 00:39:20.848755 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:39:20 crc kubenswrapper[5108]: E0219 00:39:20.852161 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:39:28 crc kubenswrapper[5108]: I0219 00:39:28.745481 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dbb775b84-bwfmh_266935b2-7e3e-4471-ab13-97b596e98f12/prometheus-operator-admission-webhook/0.log" Feb 19 00:39:28 crc kubenswrapper[5108]: I0219 00:39:28.751520 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-hx8sv_387cf543-9cc1-4861-b4ce-68abdc01d808/prometheus-operator/0.log" Feb 19 00:39:28 crc kubenswrapper[5108]: I0219 00:39:28.771465 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dbb775b84-ppkpr_0315a649-f003-4488-a10e-025063b858af/prometheus-operator-admission-webhook/0.log" Feb 19 00:39:28 crc kubenswrapper[5108]: I0219 00:39:28.879346 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-dxnlv_0ab73ba4-63c1-423b-9bc7-ecdec5a770b1/operator/0.log" Feb 19 00:39:28 crc kubenswrapper[5108]: I0219 00:39:28.916410 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bft9p_41c947a0-c927-4923-a233-a42d1a8b1039/perses-operator/0.log" Feb 19 00:39:31 crc kubenswrapper[5108]: I0219 00:39:31.868180 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:39:31 crc kubenswrapper[5108]: E0219 00:39:31.869134 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:39:45 crc kubenswrapper[5108]: I0219 00:39:45.848919 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:39:45 crc kubenswrapper[5108]: E0219 00:39:45.850081 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.151661 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524360-78rr6"] Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.153709 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" containerName="registry-server" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.153738 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" containerName="registry-server" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.154053 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="981ea5b5-02a7-44ea-a25d-2982bfdf8b30" containerName="registry-server" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.163117 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.166086 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524360-78rr6"] Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.167330 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.167572 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.167731 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\"" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.209089 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8z6\" (UniqueName: \"kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6\") pod \"auto-csr-approver-29524360-78rr6\" (UID: \"1def353e-cbab-4162-987a-6a8444e12df1\") " pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.310922 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8z6\" (UniqueName: \"kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6\") pod \"auto-csr-approver-29524360-78rr6\" (UID: 
\"1def353e-cbab-4162-987a-6a8444e12df1\") " pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.346097 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8z6\" (UniqueName: \"kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6\") pod \"auto-csr-approver-29524360-78rr6\" (UID: \"1def353e-cbab-4162-987a-6a8444e12df1\") " pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.498541 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:00 crc kubenswrapper[5108]: I0219 00:40:00.848062 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:40:00 crc kubenswrapper[5108]: E0219 00:40:00.848388 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5zp6_openshift-machine-config-operator(995cb3be-1541-4090-83fe-8bf1a8259f0d)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" Feb 19 00:40:01 crc kubenswrapper[5108]: I0219 00:40:01.052640 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524360-78rr6"] Feb 19 00:40:01 crc kubenswrapper[5108]: I0219 00:40:01.218559 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524360-78rr6" event={"ID":"1def353e-cbab-4162-987a-6a8444e12df1","Type":"ContainerStarted","Data":"fd8175fe9caea7ab9e4e44f0d9f9a170055a2c3ebba10265e61d8ba30b5f14b7"} Feb 19 00:40:03 crc kubenswrapper[5108]: I0219 00:40:03.237803 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29524360-78rr6" event={"ID":"1def353e-cbab-4162-987a-6a8444e12df1","Type":"ContainerStarted","Data":"2fffdf91eb6412b5d87ef6464a93aaea58106f3afd9fccdb4e50d7fe7eff5c94"} Feb 19 00:40:03 crc kubenswrapper[5108]: I0219 00:40:03.256640 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524360-78rr6" podStartSLOduration=1.681856011 podStartE2EDuration="3.256618823s" podCreationTimestamp="2026-02-19 00:40:00 +0000 UTC" firstStartedPulling="2026-02-19 00:40:01.062082881 +0000 UTC m=+1860.028729199" lastFinishedPulling="2026-02-19 00:40:02.636845653 +0000 UTC m=+1861.603492011" observedRunningTime="2026-02-19 00:40:03.249786478 +0000 UTC m=+1862.216432796" watchObservedRunningTime="2026-02-19 00:40:03.256618823 +0000 UTC m=+1862.223265141" Feb 19 00:40:04 crc kubenswrapper[5108]: I0219 00:40:04.251038 5108 generic.go:358] "Generic (PLEG): container finished" podID="1def353e-cbab-4162-987a-6a8444e12df1" containerID="2fffdf91eb6412b5d87ef6464a93aaea58106f3afd9fccdb4e50d7fe7eff5c94" exitCode=0 Feb 19 00:40:04 crc kubenswrapper[5108]: I0219 00:40:04.251132 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524360-78rr6" event={"ID":"1def353e-cbab-4162-987a-6a8444e12df1","Type":"ContainerDied","Data":"2fffdf91eb6412b5d87ef6464a93aaea58106f3afd9fccdb4e50d7fe7eff5c94"} Feb 19 00:40:05 crc kubenswrapper[5108]: I0219 00:40:05.597225 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:05 crc kubenswrapper[5108]: I0219 00:40:05.711040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8z6\" (UniqueName: \"kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6\") pod \"1def353e-cbab-4162-987a-6a8444e12df1\" (UID: \"1def353e-cbab-4162-987a-6a8444e12df1\") " Feb 19 00:40:05 crc kubenswrapper[5108]: I0219 00:40:05.722496 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6" (OuterVolumeSpecName: "kube-api-access-rj8z6") pod "1def353e-cbab-4162-987a-6a8444e12df1" (UID: "1def353e-cbab-4162-987a-6a8444e12df1"). InnerVolumeSpecName "kube-api-access-rj8z6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:40:05 crc kubenswrapper[5108]: I0219 00:40:05.813581 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rj8z6\" (UniqueName: \"kubernetes.io/projected/1def353e-cbab-4162-987a-6a8444e12df1-kube-api-access-rj8z6\") on node \"crc\" DevicePath \"\"" Feb 19 00:40:06 crc kubenswrapper[5108]: I0219 00:40:06.283923 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524360-78rr6" Feb 19 00:40:06 crc kubenswrapper[5108]: I0219 00:40:06.284208 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524360-78rr6" event={"ID":"1def353e-cbab-4162-987a-6a8444e12df1","Type":"ContainerDied","Data":"fd8175fe9caea7ab9e4e44f0d9f9a170055a2c3ebba10265e61d8ba30b5f14b7"} Feb 19 00:40:06 crc kubenswrapper[5108]: I0219 00:40:06.284274 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8175fe9caea7ab9e4e44f0d9f9a170055a2c3ebba10265e61d8ba30b5f14b7" Feb 19 00:40:06 crc kubenswrapper[5108]: I0219 00:40:06.324226 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524354-2ck4f"] Feb 19 00:40:06 crc kubenswrapper[5108]: I0219 00:40:06.329900 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524354-2ck4f"] Feb 19 00:40:07 crc kubenswrapper[5108]: I0219 00:40:07.862975 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0252aa21-3d8f-424d-a3b4-7d323b1677de" path="/var/lib/kubelet/pods/0252aa21-3d8f-424d-a3b4-7d323b1677de/volumes" Feb 19 00:40:09 crc kubenswrapper[5108]: I0219 00:40:09.310173 5108 generic.go:358] "Generic (PLEG): container finished" podID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerID="b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7" exitCode=0 Feb 19 00:40:09 crc kubenswrapper[5108]: I0219 00:40:09.310278 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" event={"ID":"f0bfc62a-5e28-491b-9599-9fbd7ed02100","Type":"ContainerDied","Data":"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7"} Feb 19 00:40:09 crc kubenswrapper[5108]: I0219 00:40:09.311128 5108 scope.go:117] "RemoveContainer" containerID="b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7" Feb 19 00:40:09 crc 
kubenswrapper[5108]: I0219 00:40:09.652472 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jtkpm_must-gather-dlbjv_f0bfc62a-5e28-491b-9599-9fbd7ed02100/gather/0.log" Feb 19 00:40:11 crc kubenswrapper[5108]: I0219 00:40:11.214059 5108 scope.go:117] "RemoveContainer" containerID="ef57f2e764d47f7c277d3731ef11c36e87f41ba35b7899e45550ab2d58242a03" Feb 19 00:40:15 crc kubenswrapper[5108]: I0219 00:40:15.849270 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4" Feb 19 00:40:15 crc kubenswrapper[5108]: I0219 00:40:15.905986 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jtkpm/must-gather-dlbjv"] Feb 19 00:40:15 crc kubenswrapper[5108]: I0219 00:40:15.906585 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="copy" containerID="cri-o://8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760" gracePeriod=2 Feb 19 00:40:15 crc kubenswrapper[5108]: I0219 00:40:15.908871 5108 status_manager.go:895] "Failed to get status for pod" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" err="pods \"must-gather-dlbjv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-jtkpm\": no relationship found between node 'crc' and this object" Feb 19 00:40:15 crc kubenswrapper[5108]: I0219 00:40:15.926316 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jtkpm/must-gather-dlbjv"] Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.319031 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jtkpm_must-gather-dlbjv_f0bfc62a-5e28-491b-9599-9fbd7ed02100/copy/0.log" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.319752 5108 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.321371 5108 status_manager.go:895] "Failed to get status for pod" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" err="pods \"must-gather-dlbjv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-jtkpm\": no relationship found between node 'crc' and this object" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.379384 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jtkpm_must-gather-dlbjv_f0bfc62a-5e28-491b-9599-9fbd7ed02100/copy/0.log" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.379731 5108 generic.go:358] "Generic (PLEG): container finished" podID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerID="8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760" exitCode=143 Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.379811 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.379891 5108 scope.go:117] "RemoveContainer" containerID="8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.381259 5108 status_manager.go:895] "Failed to get status for pod" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" err="pods \"must-gather-dlbjv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-jtkpm\": no relationship found between node 'crc' and this object" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.382879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"ac366c7acce34b7b884411c045d654a9edd8d5347800497a34ed707dd6cb4854"} Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.385171 5108 status_manager.go:895] "Failed to get status for pod" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" err="pods \"must-gather-dlbjv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-jtkpm\": no relationship found between node 'crc' and this object" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.401153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output\") pod \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.401329 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pf8qn\" (UniqueName: \"kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn\") pod \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\" (UID: \"f0bfc62a-5e28-491b-9599-9fbd7ed02100\") " Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.409568 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn" (OuterVolumeSpecName: "kube-api-access-pf8qn") pod "f0bfc62a-5e28-491b-9599-9fbd7ed02100" (UID: "f0bfc62a-5e28-491b-9599-9fbd7ed02100"). InnerVolumeSpecName "kube-api-access-pf8qn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.414731 5108 scope.go:117] "RemoveContainer" containerID="b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.476479 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f0bfc62a-5e28-491b-9599-9fbd7ed02100" (UID: "f0bfc62a-5e28-491b-9599-9fbd7ed02100"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.503506 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pf8qn\" (UniqueName: \"kubernetes.io/projected/f0bfc62a-5e28-491b-9599-9fbd7ed02100-kube-api-access-pf8qn\") on node \"crc\" DevicePath \"\"" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.503537 5108 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f0bfc62a-5e28-491b-9599-9fbd7ed02100-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.505981 5108 scope.go:117] "RemoveContainer" containerID="8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760" Feb 19 00:40:16 crc kubenswrapper[5108]: E0219 00:40:16.506771 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760\": container with ID starting with 8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760 not found: ID does not exist" containerID="8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.506813 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760"} err="failed to get container status \"8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760\": rpc error: code = NotFound desc = could not find container \"8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760\": container with ID starting with 8786ab84f748312cfa03bdaad4d956496b1ce0936507bf0c64b013b89a6b9760 not found: ID does not exist" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.506840 5108 scope.go:117] "RemoveContainer" 
containerID="b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7" Feb 19 00:40:16 crc kubenswrapper[5108]: E0219 00:40:16.507141 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7\": container with ID starting with b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7 not found: ID does not exist" containerID="b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.507171 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7"} err="failed to get container status \"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7\": rpc error: code = NotFound desc = could not find container \"b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7\": container with ID starting with b742563355141cef2272d0484871229dc1cc261ab92c49d899dc90a41994b7b7 not found: ID does not exist" Feb 19 00:40:16 crc kubenswrapper[5108]: I0219 00:40:16.702165 5108 status_manager.go:895] "Failed to get status for pod" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" pod="openshift-must-gather-jtkpm/must-gather-dlbjv" err="pods \"must-gather-dlbjv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-jtkpm\": no relationship found between node 'crc' and this object" Feb 19 00:40:17 crc kubenswrapper[5108]: I0219 00:40:17.864640 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" path="/var/lib/kubelet/pods/f0bfc62a-5e28-491b-9599-9fbd7ed02100/volumes" Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.159364 5108 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29524362-np28p"]
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161774 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="gather"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161811 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="gather"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161873 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="copy"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161889 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="copy"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161925 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1def353e-cbab-4162-987a-6a8444e12df1" containerName="oc"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.161978 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1def353e-cbab-4162-987a-6a8444e12df1" containerName="oc"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.162340 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="gather"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.162367 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f0bfc62a-5e28-491b-9599-9fbd7ed02100" containerName="copy"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.162402 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1def353e-cbab-4162-987a-6a8444e12df1" containerName="oc"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.175731 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524362-np28p"]
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.175917 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.180090 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.181099 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.181615 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\""
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.241351 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcbc\" (UniqueName: \"kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc\") pod \"auto-csr-approver-29524362-np28p\" (UID: \"cf16ae82-e360-442f-b643-bd4f0a399c5d\") " pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.343661 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcbc\" (UniqueName: \"kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc\") pod \"auto-csr-approver-29524362-np28p\" (UID: \"cf16ae82-e360-442f-b643-bd4f0a399c5d\") " pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.377868 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcbc\" (UniqueName: \"kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc\") pod \"auto-csr-approver-29524362-np28p\" (UID: \"cf16ae82-e360-442f-b643-bd4f0a399c5d\") " pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.502361 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:00 crc kubenswrapper[5108]: I0219 00:42:00.800388 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524362-np28p"]
Feb 19 00:42:01 crc kubenswrapper[5108]: I0219 00:42:01.592956 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524362-np28p" event={"ID":"cf16ae82-e360-442f-b643-bd4f0a399c5d","Type":"ContainerStarted","Data":"2db2d365c7d9115728f7fc03fdd5f945a05e5a12741fb64fd3bcffbd0e85d7bf"}
Feb 19 00:42:02 crc kubenswrapper[5108]: I0219 00:42:02.606310 5108 generic.go:358] "Generic (PLEG): container finished" podID="cf16ae82-e360-442f-b643-bd4f0a399c5d" containerID="b2be878e7a8cd301c8b6dd53395efc9543ac4a27bc2a27f09b8ea732864f3f87" exitCode=0
Feb 19 00:42:02 crc kubenswrapper[5108]: I0219 00:42:02.606595 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524362-np28p" event={"ID":"cf16ae82-e360-442f-b643-bd4f0a399c5d","Type":"ContainerDied","Data":"b2be878e7a8cd301c8b6dd53395efc9543ac4a27bc2a27f09b8ea732864f3f87"}
Feb 19 00:42:03 crc kubenswrapper[5108]: I0219 00:42:03.988393 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.114233 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kcbc\" (UniqueName: \"kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc\") pod \"cf16ae82-e360-442f-b643-bd4f0a399c5d\" (UID: \"cf16ae82-e360-442f-b643-bd4f0a399c5d\") "
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.124727 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc" (OuterVolumeSpecName: "kube-api-access-5kcbc") pod "cf16ae82-e360-442f-b643-bd4f0a399c5d" (UID: "cf16ae82-e360-442f-b643-bd4f0a399c5d"). InnerVolumeSpecName "kube-api-access-5kcbc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.216965 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5kcbc\" (UniqueName: \"kubernetes.io/projected/cf16ae82-e360-442f-b643-bd4f0a399c5d-kube-api-access-5kcbc\") on node \"crc\" DevicePath \"\""
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.629186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524362-np28p" event={"ID":"cf16ae82-e360-442f-b643-bd4f0a399c5d","Type":"ContainerDied","Data":"2db2d365c7d9115728f7fc03fdd5f945a05e5a12741fb64fd3bcffbd0e85d7bf"}
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.629861 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2db2d365c7d9115728f7fc03fdd5f945a05e5a12741fb64fd3bcffbd0e85d7bf"
Feb 19 00:42:04 crc kubenswrapper[5108]: I0219 00:42:04.629315 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524362-np28p"
Feb 19 00:42:05 crc kubenswrapper[5108]: I0219 00:42:05.085798 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524356-wkdgx"]
Feb 19 00:42:05 crc kubenswrapper[5108]: I0219 00:42:05.096078 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524356-wkdgx"]
Feb 19 00:42:05 crc kubenswrapper[5108]: I0219 00:42:05.863713 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f2fdf3-00d2-4230-8643-56af472eab11" path="/var/lib/kubelet/pods/01f2fdf3-00d2-4230-8643-56af472eab11/volumes"
Feb 19 00:42:11 crc kubenswrapper[5108]: I0219 00:42:11.367573 5108 scope.go:117] "RemoveContainer" containerID="30c25d428721befe30a445b0da52df970afd616793d06e890d6aa9d3c2ad6288"
Feb 19 00:42:36 crc kubenswrapper[5108]: I0219 00:42:36.145615 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:42:36 crc kubenswrapper[5108]: I0219 00:42:36.146468 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:43:06 crc kubenswrapper[5108]: I0219 00:43:06.145586 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:43:06 crc kubenswrapper[5108]: I0219 00:43:06.146070 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.145535 5108 patch_prober.go:28] interesting pod/machine-config-daemon-k5zp6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.146274 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.146357 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6"
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.147400 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac366c7acce34b7b884411c045d654a9edd8d5347800497a34ed707dd6cb4854"} pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.147692 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" podUID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerName="machine-config-daemon" containerID="cri-o://ac366c7acce34b7b884411c045d654a9edd8d5347800497a34ed707dd6cb4854" gracePeriod=600
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.285213 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.588442 5108 generic.go:358] "Generic (PLEG): container finished" podID="995cb3be-1541-4090-83fe-8bf1a8259f0d" containerID="ac366c7acce34b7b884411c045d654a9edd8d5347800497a34ed707dd6cb4854" exitCode=0
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.588919 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerDied","Data":"ac366c7acce34b7b884411c045d654a9edd8d5347800497a34ed707dd6cb4854"}
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.589080 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5zp6" event={"ID":"995cb3be-1541-4090-83fe-8bf1a8259f0d","Type":"ContainerStarted","Data":"15e699ffc1baf758afc0c093c506c53907730b06ae95df09e4279eb6dc6be9cd"}
Feb 19 00:43:36 crc kubenswrapper[5108]: I0219 00:43:36.589123 5108 scope.go:117] "RemoveContainer" containerID="3b540a7fba44d7be85a107c2255f7d2bb72198bd989c4a7f79166c02e82640a4"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.093952 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.097287 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf16ae82-e360-442f-b643-bd4f0a399c5d" containerName="oc"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.097315 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16ae82-e360-442f-b643-bd4f0a399c5d" containerName="oc"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.097868 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf16ae82-e360-442f-b643-bd4f0a399c5d" containerName="oc"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.103498 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.114717 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.189275 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zr5j\" (UniqueName: \"kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j\") pod \"infrawatch-operators-2jrjl\" (UID: \"fea4c399-ad15-482d-8a2b-dc5262755cf1\") " pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.290832 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zr5j\" (UniqueName: \"kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j\") pod \"infrawatch-operators-2jrjl\" (UID: \"fea4c399-ad15-482d-8a2b-dc5262755cf1\") " pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.310022 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zr5j\" (UniqueName: \"kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j\") pod \"infrawatch-operators-2jrjl\" (UID: \"fea4c399-ad15-482d-8a2b-dc5262755cf1\") " pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.441315 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:41 crc kubenswrapper[5108]: I0219 00:43:41.886839 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:41 crc kubenswrapper[5108]: W0219 00:43:41.897157 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea4c399_ad15_482d_8a2b_dc5262755cf1.slice/crio-b4d667539990ebe5ea040b6a45dcaf32485ff83517753429e6344d2a6444d847 WatchSource:0}: Error finding container b4d667539990ebe5ea040b6a45dcaf32485ff83517753429e6344d2a6444d847: Status 404 returned error can't find the container with id b4d667539990ebe5ea040b6a45dcaf32485ff83517753429e6344d2a6444d847
Feb 19 00:43:42 crc kubenswrapper[5108]: I0219 00:43:42.658661 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2jrjl" event={"ID":"fea4c399-ad15-482d-8a2b-dc5262755cf1","Type":"ContainerStarted","Data":"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"}
Feb 19 00:43:42 crc kubenswrapper[5108]: I0219 00:43:42.658753 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2jrjl" event={"ID":"fea4c399-ad15-482d-8a2b-dc5262755cf1","Type":"ContainerStarted","Data":"b4d667539990ebe5ea040b6a45dcaf32485ff83517753429e6344d2a6444d847"}
Feb 19 00:43:42 crc kubenswrapper[5108]: I0219 00:43:42.689213 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-2jrjl" podStartSLOduration=1.572312844 podStartE2EDuration="1.689187665s" podCreationTimestamp="2026-02-19 00:43:41 +0000 UTC" firstStartedPulling="2026-02-19 00:43:41.899048891 +0000 UTC m=+2080.865695229" lastFinishedPulling="2026-02-19 00:43:42.015923742 +0000 UTC m=+2080.982570050" observedRunningTime="2026-02-19 00:43:42.67828033 +0000 UTC m=+2081.644926678" watchObservedRunningTime="2026-02-19 00:43:42.689187665 +0000 UTC m=+2081.655833993"
Feb 19 00:43:51 crc kubenswrapper[5108]: I0219 00:43:51.442585 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:51 crc kubenswrapper[5108]: I0219 00:43:51.443388 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:51 crc kubenswrapper[5108]: I0219 00:43:51.504314 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:51 crc kubenswrapper[5108]: I0219 00:43:51.815676 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:51 crc kubenswrapper[5108]: I0219 00:43:51.866547 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:53 crc kubenswrapper[5108]: I0219 00:43:53.790037 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-2jrjl" podUID="fea4c399-ad15-482d-8a2b-dc5262755cf1" containerName="registry-server" containerID="cri-o://b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a" gracePeriod=2
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.216500 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.251595 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zr5j\" (UniqueName: \"kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j\") pod \"fea4c399-ad15-482d-8a2b-dc5262755cf1\" (UID: \"fea4c399-ad15-482d-8a2b-dc5262755cf1\") "
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.259331 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j" (OuterVolumeSpecName: "kube-api-access-9zr5j") pod "fea4c399-ad15-482d-8a2b-dc5262755cf1" (UID: "fea4c399-ad15-482d-8a2b-dc5262755cf1"). InnerVolumeSpecName "kube-api-access-9zr5j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.353500 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zr5j\" (UniqueName: \"kubernetes.io/projected/fea4c399-ad15-482d-8a2b-dc5262755cf1-kube-api-access-9zr5j\") on node \"crc\" DevicePath \"\""
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.801196 5108 generic.go:358] "Generic (PLEG): container finished" podID="fea4c399-ad15-482d-8a2b-dc5262755cf1" containerID="b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a" exitCode=0
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.801265 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2jrjl" event={"ID":"fea4c399-ad15-482d-8a2b-dc5262755cf1","Type":"ContainerDied","Data":"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"}
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.801341 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2jrjl" event={"ID":"fea4c399-ad15-482d-8a2b-dc5262755cf1","Type":"ContainerDied","Data":"b4d667539990ebe5ea040b6a45dcaf32485ff83517753429e6344d2a6444d847"}
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.801344 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2jrjl"
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.801448 5108 scope.go:117] "RemoveContainer" containerID="b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.843036 5108 scope.go:117] "RemoveContainer" containerID="b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"
Feb 19 00:43:54 crc kubenswrapper[5108]: E0219 00:43:54.843760 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a\": container with ID starting with b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a not found: ID does not exist" containerID="b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.843820 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a"} err="failed to get container status \"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a\": rpc error: code = NotFound desc = could not find container \"b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a\": container with ID starting with b439011b9cc86297b5cb348e0d1279ce6f85d54c2201c87f84e3e84325713a7a not found: ID does not exist"
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.884526 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:54 crc kubenswrapper[5108]: I0219 00:43:54.893777 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-2jrjl"]
Feb 19 00:43:55 crc kubenswrapper[5108]: I0219 00:43:55.859674 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea4c399-ad15-482d-8a2b-dc5262755cf1" path="/var/lib/kubelet/pods/fea4c399-ad15-482d-8a2b-dc5262755cf1/volumes"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.156680 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524364-dlvfw"]
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.158706 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fea4c399-ad15-482d-8a2b-dc5262755cf1" containerName="registry-server"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.158734 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea4c399-ad15-482d-8a2b-dc5262755cf1" containerName="registry-server"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.159007 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fea4c399-ad15-482d-8a2b-dc5262755cf1" containerName="registry-server"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.169578 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.170518 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524364-dlvfw"]
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.176317 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.176359 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.176334 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lh5mf\""
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.279667 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mfs\" (UniqueName: \"kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs\") pod \"auto-csr-approver-29524364-dlvfw\" (UID: \"d79ef22b-d991-41af-9f1e-e787a4d8cef6\") " pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.381490 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mfs\" (UniqueName: \"kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs\") pod \"auto-csr-approver-29524364-dlvfw\" (UID: \"d79ef22b-d991-41af-9f1e-e787a4d8cef6\") " pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.412250 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mfs\" (UniqueName: \"kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs\") pod \"auto-csr-approver-29524364-dlvfw\" (UID: \"d79ef22b-d991-41af-9f1e-e787a4d8cef6\") " pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:00 crc kubenswrapper[5108]: I0219 00:44:00.503455 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:01 crc kubenswrapper[5108]: I0219 00:44:01.009695 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524364-dlvfw"]
Feb 19 00:44:01 crc kubenswrapper[5108]: I0219 00:44:01.878256 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524364-dlvfw" event={"ID":"d79ef22b-d991-41af-9f1e-e787a4d8cef6","Type":"ContainerStarted","Data":"55d6db87e6142fed7f1710e2ca5150936c5e9ccdb0b00fbcff3ee0d6e31292c7"}
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.778537 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log"
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.781072 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v42mj_c8ba935e-bb01-466a-8b94-8b0c15e535b1/kube-multus/0.log"
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.790241 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.792356 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.884767 5108 generic.go:358] "Generic (PLEG): container finished" podID="d79ef22b-d991-41af-9f1e-e787a4d8cef6" containerID="d6c5dfce7327bfce03c0932ff4d9c0ac7f44088a905d967df1ead951d53ae7e5" exitCode=0
Feb 19 00:44:02 crc kubenswrapper[5108]: I0219 00:44:02.884884 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524364-dlvfw" event={"ID":"d79ef22b-d991-41af-9f1e-e787a4d8cef6","Type":"ContainerDied","Data":"d6c5dfce7327bfce03c0932ff4d9c0ac7f44088a905d967df1ead951d53ae7e5"}
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.234005 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.347251 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mfs\" (UniqueName: \"kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs\") pod \"d79ef22b-d991-41af-9f1e-e787a4d8cef6\" (UID: \"d79ef22b-d991-41af-9f1e-e787a4d8cef6\") "
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.357687 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs" (OuterVolumeSpecName: "kube-api-access-v6mfs") pod "d79ef22b-d991-41af-9f1e-e787a4d8cef6" (UID: "d79ef22b-d991-41af-9f1e-e787a4d8cef6"). InnerVolumeSpecName "kube-api-access-v6mfs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.450017 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6mfs\" (UniqueName: \"kubernetes.io/projected/d79ef22b-d991-41af-9f1e-e787a4d8cef6-kube-api-access-v6mfs\") on node \"crc\" DevicePath \"\""
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.907785 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524364-dlvfw" event={"ID":"d79ef22b-d991-41af-9f1e-e787a4d8cef6","Type":"ContainerDied","Data":"55d6db87e6142fed7f1710e2ca5150936c5e9ccdb0b00fbcff3ee0d6e31292c7"}
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.908288 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d6db87e6142fed7f1710e2ca5150936c5e9ccdb0b00fbcff3ee0d6e31292c7"
Feb 19 00:44:04 crc kubenswrapper[5108]: I0219 00:44:04.907805 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524364-dlvfw"
Feb 19 00:44:05 crc kubenswrapper[5108]: I0219 00:44:05.333548 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524358-s6vvf"]
Feb 19 00:44:05 crc kubenswrapper[5108]: I0219 00:44:05.343303 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524358-s6vvf"]
Feb 19 00:44:05 crc kubenswrapper[5108]: I0219 00:44:05.856659 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33fd46e-730f-4f8c-8bed-de7ab778e10c" path="/var/lib/kubelet/pods/c33fd46e-730f-4f8c-8bed-de7ab778e10c/volumes"
Feb 19 00:44:11 crc kubenswrapper[5108]: I0219 00:44:11.517383 5108 scope.go:117] "RemoveContainer" containerID="9b1753a0dd40adb51c4b5171ebd7359a1113a16afa40a31e7cf0919730580e46"